TWI411425B - Eye recognition and tracking method applied to personnel monitoring - Google Patents

Eye recognition and tracking method applied to personnel monitoring

Info

Publication number
TWI411425B
TWI411425B (application TW96133980A)
Authority
TW
Taiwan
Prior art keywords
eye
tracking
image
area
region
Prior art date
Application number
TW96133980A
Other languages
Chinese (zh)
Other versions
TW200911195A (en)
Original Assignee
Automotive Res & Testing Ct
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Automotive Res & Testing Ct filed Critical Automotive Res & Testing Ct
Priority to TW96133980A priority Critical patent/TWI411425B/en
Publication of TW200911195A publication Critical patent/TW200911195A/en
Application granted granted Critical
Publication of TWI411425B publication Critical patent/TWI411425B/en

Landscapes

  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention discloses an eye recognition and tracking method applied to personnel monitoring, and an apparatus for it. The apparatus includes an IR illuminator, a camera, a filter and a processing unit. The method mainly includes a pre-processing flow, a detection-mode flow and a tracking-mode flow. The pre-processing flow comprises an image-capture step and an image-sequence step, which select one frame from the video obtained with the IR source. The detection-mode flow consists of a candidate-region search step, a candidate-region analysis step and an eye-region confirmation step, which find the eye region in the frame and analyze it. The tracking-mode flow is composed of an eye-region tracking step and a tracking-judgment step, which evaluate whether the eye region needs to be re-located. Through these flows and steps, the method locates the eyes simply and reliably.

Description

Eye recognition and tracking method applied to personnel monitoring

The present invention relates to an eye recognition and tracking method applied to personnel monitoring, and more particularly to an image-processing method and apparatus for detecting and tracking the eyes.

A processing unit analyzes and compares the features and details of a continuous image stream and, through a preset procedure and analysis method, extracts the required data from the images so that it can serve as one link in processes such as automatic control. Among these techniques, the processing and interpretation of human facial images has become a popular and important technology because of its wide range of applications. Such facial-image processing is mainly applied to tracking a driver's eyes and issuing warnings, preventing the hazards caused by driving while fatigued; it does not stop there, of course, as the monitoring and warning of all kinds of other personnel can also be automated with this technique, which shows the importance and practicality of image tracking.

In the prior art, ROC Patent Publication No. 436436, a method and device for continuously monitoring a vehicle driver, uses moving-pixel analysis between two consecutive frames of homogeneous video to detect the driver's horizontal movement and center the driver's face within the corresponding frame of the continuing video, then uses moving-pixel analysis between two further consecutive frames to detect the vertical displacement of the centered face and so locate the eyes. ROC Patent Publication No. 531402, an eye tracking method, convolves iris information captured in advance with the currently captured iris information to find the iris position. ROC Patent No. I225222, a robust face-detection algorithm for real-time image streams, receives the facial image data in the YCbCr color space and combines it with motion analysis to determine possible eye positions. ROC Patent Publication No. 200719247, an eye-detection system for controlling a display device, compares the captured facial image with preset face data in a database to identify the eye position. ROC Patent Publication No. 200719871, a real-time face-detection method suited to complex backgrounds, receives the facial image data in the YCbCr color space and combines it with support-vector computation to determine possible eye positions. As this prior art shows, the eyes appear to be the primary recognition target on the human face, yet the recognition methods of the prior art remain cumbersome: some require capturing facial images or iris information, others depend on YCbCr processing, and none provides a simple, fast way of tracking the eyes. They therefore either demand expensive hardware or require the user's own image to be captured in advance before they can be used, so their convenience and applicability are comparatively poor. In addition, the ways these techniques track the eyes are easily limited by weather, ambient brightness and items worn by the user (such as glasses or sunglasses), making it difficult to achieve effective tracking performance and accuracy. Taken as a whole, the prior art is therefore far from practical and still needs improvement.

The main object of the present invention is to provide an eye recognition and tracking method applied to personnel monitoring that remedies the shortcomings of the conventional approaches, namely overly cumbersome tracking and recognition procedures, demanding hardware requirements, inconvenient use, and susceptibility to weather conditions and items worn by the user.

To achieve the above object, the apparatus comprises an infrared illuminator, a camera, a filter and a processing unit, and the method comprises a pre-processing flow, a detection-mode flow and a tracking-mode flow, wherein: the pre-processing flow includes an image-capture step, in which an infrared image is captured as the video source in coordination with the infrared light source, and an image-sequence step, in which the captured video is broken down into a number of individual frames for subsequent processing; the detection-mode flow includes a candidate-region search step, in which the captured frame is compared with a template to pick out candidate regions that may be eyes, a candidate-region analysis step, in which the eye region is confirmed from the candidate regions using grouping and geometric relationships, and an eye-region confirmation step, in which a comparison is made against a real-eye template to improve accuracy; the tracking-mode flow includes an eye-region tracking step, in which the eye region is tracked by object tracking, and a tracking-judgment step, in which, if the selected eye region cannot be tracked, the method restarts from the image-capture step, and if the selected eye region can be tracked, the method restarts from the image-capture step but skips the candidate-region search and candidate-region analysis steps. With the above method, an infrared image serves as the video source, so the method can be used around the clock and is unaffected by glasses or sunglasses worn by the user; the detection-mode flow processes a single frame, which saves hardware resources and speeds up processing; and the tracking-mode flow skips the detection-mode flow whenever tracking succeeds, which further lightens the load on the processing unit. The invention is therefore a practical and progressive creation that deserves to be promoted by industry and made known to the public.

The present invention relates to an eye recognition and tracking method applied to personnel monitoring, in which the apparatus comprises an infrared illuminator, a camera, a filter and a processing unit. The infrared illuminator emits an infrared light source, and the camera, which is fitted with a filter and connected to the processing unit, filters out light other than infrared and captures infrared images as the video supplied to the processing unit. The processing unit is divided into an image-processing unit and an image-recognition unit, so that the video can be broken down into single frames for the subsequent recognition and tracking flows. As shown in the first figure, this eye recognition and tracking method applied to personnel monitoring comprises A. a pre-processing flow, B. a detection-mode flow, and C. a tracking-mode flow; the meaning and detailed steps of each flow are described below.

A. Pre-processing flow: the pre-processing flow is carried out mainly with the infrared illuminator and the camera fitted with a filter. The infrared illuminator provides an infrared light source, and the camera captures the reflected infrared light to produce an infrared image. The flow comprises a. an image-capture step and b. an image-sequence step.

a. Image-capture step: in coordination with the infrared light source of the infrared illuminator, the camera fitted with a filter captures an infrared image. The infrared illuminator overcomes changes in ambient brightness as well as the accuracy problems caused by occlusion from glasses or sunglasses, and the filter removes light other than infrared, improving the fidelity of the image. The infrared illuminator and camera may be installed anywhere; taking in-vehicle use as an example, they may be placed near the air vents, instrument panel, rear-view mirror, sun visor or pillars, but are not limited to these positions.

b. Image-sequence step: this step breaks the video coming from the infrared images down into a number of individual frames, selects one of them, and uses that frame as the reference for the subsequent steps, so that processing operates on a single frame and system requirements and resources are saved. The image-sequence step is handled by the image-processing unit of the processing unit.
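The patent does not prescribe an implementation for these two steps, but as a minimal illustration they amount to grabbing a frame from the infrared camera and handing a single grayscale picture to the later stages. The sketch below assumes Python with OpenCV (`cv2`) and assumes the IR camera, already fitted with its visible-light-cut filter, appears to the operating system as an ordinary video device; the device index `0` is a placeholder.

```python
import cv2

def grab_reference_frame(device_index=0):
    """a. capture an IR frame; b. keep one frame as the reference picture."""
    cap = cv2.VideoCapture(device_index)   # IR camera assumed to appear as a normal video device
    ok, frame = cap.read()                 # one frame of the incoming video stream
    cap.release()
    if not ok:
        return None
    # the later detection steps only need intensity, so convert to grayscale
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```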

B. Detection-mode flow: the detection-mode flow processes the frame obtained in the image-sequence step through the image-recognition unit of the processing unit, using c. a candidate-region search step, d. a candidate-region analysis step and e. an eye-region confirmation step to determine the correct position of the eye region in that frame.

c. Candidate-region search step: the processing unit performs pattern recognition, comparing the selected frame with a simulated template to make a preliminary selection of candidate regions that are likely to be eyes. The simulated template is illustrated in the second and third figures; it is built mainly from the pupil 11 and iris 12 features of a human eye 10. As shown in the second figure, in a normal eye 10 the pupil 11 appears dark while the iris 12 appears lighter than the pupil 11. Based on this feature, the template shown in the third figure is built around the dark spot: a central region 13 is defined, surrounded by lighter surrounding regions 14, and the central region 13 and surrounding regions 14 are arranged roughly in a cross. When performing template recognition, the processing unit therefore uses the pattern of the central region 13 and surrounding regions 14 to search the frame for every area that satisfies the dark-center, light-surround condition, and lists all selected areas as candidate regions. The arrangement of the central region 13 and surrounding regions 14 is not limited to a cross shape or to five cells.
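As a rough illustration of the dark-center, light-surround cross test, the sketch below (again assuming OpenCV and NumPy) flags every pixel whose local cell mean is darker, by some margin, than the cell means one cell away in each of the four cross directions. The cell size and contrast margin are arbitrary placeholders, not values taken from the patent, and the border wrap-around introduced by `np.roll` is ignored here.

```python
import cv2
import numpy as np

def find_candidate_regions(gray, cell=9, margin=15):
    """Flag pupil-like spots: a dark centre cell whose four cross-wise
    neighbour cells are all noticeably brighter (cf. the third figure)."""
    img = gray.astype(np.float32)
    means = cv2.boxFilter(img, -1, (cell, cell))   # local mean of each cell-sized patch
    up    = np.roll(means,  cell, axis=0)
    down  = np.roll(means, -cell, axis=0)
    left  = np.roll(means,  cell, axis=1)
    right = np.roll(means, -cell, axis=1)
    candidate = ((means + margin < up) & (means + margin < down) &
                 (means + margin < left) & (means + margin < right))
    return (candidate * 255).astype(np.uint8)      # binary mask of candidate pixels
```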

d. Candidate-region analysis step: the previous step gives the processing unit a preliminary list of several candidate regions that may be eyes; this step further analyzes how likely each candidate region is to be an eye and picks out the most likely one. It relies mainly on a connected-component labeling algorithm combined with analysis of grouping and geometric relationships. The principle of the connected-component algorithm is to take each of the candidate regions selected above and, using their grouping and geometric relationships such as size, shape and relative position, filter out the candidate region with the highest likelihood. Taking the fourth to sixth figures as an example, the fourth figure shows a normal human face 20 that includes eyes 21 and a nose 22. The previous step makes a preliminary selection of dark-center, light-surround areas that are likely to be eyes; as shown in the fifth figure, these are a first candidate region 211 and a second candidate region 221. Analyzing them with the connected-component algorithm in terms of grouping and geometric relationships: in size, the first candidate region 211 better matches the size of an eye and is therefore more likely, while the second candidate region 221 is smaller and less likely; in shape, both the first and second candidate regions 211, 221 roughly match the shape of an eye; in relative position, the first candidate region 211 is moderately placed and therefore more likely, while the second candidate region 221 is too narrowly placed and less likely. Through this analysis, the first candidate region 211 can be determined to be the eye region 21 and located as shown in the sixth figure.
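A minimal sketch of this analysis step, assuming the binary candidate mask from the previous sketch: it labels connected components with OpenCV's `connectedComponentsWithStats` and keeps the blob whose area and aspect ratio best fit an eye. The area and aspect-ratio bounds are illustrative placeholders, and a fuller implementation would also check the relative position of paired regions as the text describes.

```python
import cv2

def pick_eye_region(candidate_mask, min_area=40, max_area=1200,
                    min_aspect=1.0, max_aspect=4.0):
    """Connected-component labeling plus simple size/shape screening;
    returns the bounding box (x, y, w, h) of the best blob, or None."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        candidate_mask, connectivity=8)
    best, best_area = None, 0
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = w / float(h)                   # eye regions are wider than tall
        if min_area <= area <= max_area and min_aspect <= aspect <= max_aspect:
            if area > best_area:                # crude preference for the larger valid blob
                best, best_area = (int(x), int(y), int(w), int(h)), area
    return best
```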

e. Eye-region confirmation step: this step performs a further confirmation and may be omitted to speed up processing. It compares the located region against a real-eye template to improve accuracy. In contrast to the simulated template, the real-eye template is a template made from the eyes of ordinary people and does not have to be limited to the user's own eyes, so accuracy is improved while applicability is preserved.
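One plausible reading of this optional confirmation, sketched below under that assumption, is a normalized cross-correlation of the located region against a stored generic real-eye image. The grayscale template `eye_template` and the acceptance threshold are hypothetical; neither is specified in the patent.

```python
import cv2

def confirm_eye(gray, bbox, eye_template, threshold=0.5):
    """Compare the located region against a generic real-eye template and
    accept it only when the correlation score is high enough."""
    x, y, w, h = bbox
    roi = gray[y:y + h, x:x + w]
    roi = cv2.resize(roi, (eye_template.shape[1], eye_template.shape[0]))
    score = cv2.matchTemplate(roi, eye_template, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= threshold
```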

C. Tracking-mode flow: once the detection-mode flow has used the processing unit to analyze and confirm the eye region in the selected frame, the method enters the tracking-mode flow, which is handled by the image-recognition unit of the processing unit and continuously tracks that eye region. Until the tracking-mode flow fails to track the eye region successfully, there is no need to enter the detection-mode flow again; that is, once the eye region has been confirmed, the method goes straight into the tracking-mode flow after the pre-processing flow has selected a frame, saving system resources. This flow comprises f. an eye-region tracking step and g. a tracking-judgment step.

f. Eye-region tracking step: once the previous flow has confirmed the eye region in the captured frame, this step uses the processing unit to follow it by object tracking. There are many object-tracking methods; this embodiment mainly uses the CamShift tracking algorithm, but is not limited to it. The tracking algorithm proceeds as follows:

I. Compute the pixel-distribution information of the whole image.

II. Select an initial region (in this embodiment, the eye region), take a block slightly larger than this region as the search window, and compute the pixel distribution inside it.

III. Perform a mean-shift computation to find the center of the search window, then redefine the center point and update the zero-order moment information.

IV. Redefine the center of the search window according to the result of III, resize the window according to the zero-order moment, and then repeat II.

The eye region is tracked with this algorithm, and after each tracking pass the next step is performed.
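Steps I–IV above essentially describe what OpenCV's `cv2.CamShift` routine does. A minimal sketch, assuming a grayscale IR frame and the bounding box returned by the detection steps: the eye region's intensity histogram is back-projected onto each new frame, CamShift re-centres and resizes the search window, and a collapsed window is treated as a tracking failure. The bin count and termination criteria are placeholder choices, not values from the patent.

```python
import cv2

def init_tracker(gray, bbox, bins=64):
    """Step II: build the intensity histogram of the confirmed eye region."""
    x, y, w, h = bbox
    roi = gray[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0], None, [bins], [0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_step(gray, hist, window):
    """Steps I, III, IV: back-project the histogram onto the new frame and let
    CamShift move and resize the search window; None signals tracking failure."""
    back_proj = cv2.calcBackProject([gray], [0], hist, [0, 256], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.CamShift(back_proj, window, criteria)
    if window[2] == 0 or window[3] == 0:        # degenerate window: lost the eye
        return None
    return window
```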

g. Tracking-judgment step: based on the previous step, determine whether tracking succeeded. If tracking fails, the eye region has shifted considerably, or the person may have left or closed their eyes, so the method returns to A. the pre-processing flow to re-establish the eye region; if the eye region still cannot be established, a conventional warning mechanism can additionally be used to warn that the person has left or has closed their eyes. If tracking succeeds, the method returns to A. the pre-processing flow, skips B. the detection-mode flow, and continues tracking.
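Taken together, the flows reduce to a small state machine: run detection until an eye region is confirmed, then run only the tracking step until it fails, at which point detection restarts. The sketch below reuses the hypothetical helpers from the earlier sketches (`find_candidate_regions`, `pick_eye_region`, `init_tracker`, `track_step`), omits the optional confirmation step and the warning mechanism, and illustrates only the control flow, not the patented implementation itself.

```python
import cv2

def monitor(device_index=0):
    """Detection mode until an eye box is confirmed, then tracking mode
    until it fails, after which detection restarts (steps a-g)."""
    cap = cv2.VideoCapture(device_index)
    hist, window = None, None
    while True:
        ok, frame = cap.read()                          # A. pre-processing
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if window is None:                              # B. detection mode (c-e)
            mask = find_candidate_regions(gray)
            bbox = pick_eye_region(mask)
            if bbox is None:
                continue                                # no eye found, try the next frame
            hist = init_tracker(gray, bbox)
            window = bbox
        else:                                           # C. tracking mode (f-g)
            window = track_step(gray, hist, window)     # None => fall back to detection
    cap.release()
```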

The above is merely a preferred embodiment used to explain the present invention and is not intended to limit it in any way; any modification or change made in the same creative spirit should still fall within the scope that this invention intends to protect.

In summary, the eye recognition and tracking method applied to personnel monitoring of the present invention fully meets the needs of industrial development in structural design, practicality and cost-effectiveness, and the disclosed structure is an unprecedented, innovative construction, so its novelty should be beyond doubt; moreover, the invention offers improved efficacy over conventional structures and is therefore also progressive. It fully satisfies the requirements for an invention patent application under the Patent Act; the patent application is accordingly filed in accordance with the law, and early examination and approval are respectfully requested.

10 — eye

11 — pupil

12 — iris

13 — central region

14 — surrounding region

20 — face

21 — eye

211 — first candidate region

22 — nose

221 — second candidate region

The first figure is a flow chart of the steps of the present invention.

The second and third figures are schematic illustrations of the simulated template of the present invention.

The fourth to sixth figures are schematic illustrations of the candidate regions and the eye region of the present invention.

Claims (8)

1. An eye recognition and tracking method applied to personnel monitoring, comprising the following steps: an image-capture step: capturing video in coordination with an infrared light source; an image-sequence step: breaking the captured video down into a number of individual frames for subsequent processing; a candidate-region search step: comparing the captured frame with a template to select candidate regions that may be eyes; a candidate-region analysis step: confirming the eye region from the candidate regions using grouping and geometric relationships; an eye-region tracking step: tracking the eye region by object tracking; and a tracking-judgment step: if the selected eye region cannot be tracked, restarting from the image-capture step, and if the selected eye region can be tracked, skipping the candidate-region search step and the candidate-region analysis step and continuing to track the frames that are read in; whereby the above steps effectively achieve eye tracking and improved accuracy.

2. The eye recognition and tracking method applied to personnel monitoring of claim 1, wherein the steps are grouped by their nature into a pre-processing flow, a detection-mode flow and a tracking-mode flow; the pre-processing flow comprises the image-capture step and the image-sequence step; the detection-mode flow comprises the candidate-region search step and the candidate-region analysis step; and the tracking-mode flow comprises the eye-region tracking step and the tracking-judgment step.

3. The eye recognition and tracking method applied to personnel monitoring of claim 2, wherein the detection-mode flow further comprises, after the candidate-region analysis step, an eye-region confirmation step that performs a comparison against a real-eye template to improve accuracy.

4. The eye recognition and tracking method applied to personnel monitoring of claim 1, 2 or 3, wherein the candidate-region analysis step is carried out with a connected-component labeling algorithm using grouping and geometric relationships, the grouping and geometric relationships comprising at least one of size, shape and relative position.

5. The eye recognition and tracking method applied to personnel monitoring of claim 1, 2 or 3, wherein the image-capture step is carried out with an infrared illuminator together with a camera fitted with a filter.

6. The eye recognition and tracking method applied to personnel monitoring of claim 1, 2 or 3, wherein the image-sequence step is carried out by an image-processing unit.

7. The eye recognition and tracking method applied to personnel monitoring of claim 1, 2 or 3, wherein the candidate-region search step, the candidate-region analysis step, the eye-region tracking step and the tracking-judgment step are carried out by an image-recognition unit.

8. The eye recognition and tracking method applied to personnel monitoring of claim 3, wherein the eye-region confirmation step is carried out by an image-recognition unit.
TW96133980A 2007-09-12 2007-09-12 Used in the monitoring of the eye to identify the method TWI411425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW96133980A TWI411425B (en) 2007-09-12 2007-09-12 Eye recognition and tracking method applied to personnel monitoring

Publications (2)

Publication Number Publication Date
TW200911195A TW200911195A (en) 2009-03-16
TWI411425B (en) 2013-10-11

Family

ID=44724618

Family Applications (1)

Application Number Title Priority Date Filing Date
TW96133980A TWI411425B (en) 2007-09-12 2007-09-12 Eye recognition and tracking method applied to personnel monitoring

Country Status (1)

Country Link
TW (1) TWI411425B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI469061B (en) * 2012-12-19 2015-01-11 Nat Univ Chung Hsing Applies to eye recognition methods and identification systems

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US111111A (en) * 1871-01-24 Improvement in type-casting machines
TW200624082A (en) * 2005-01-07 2006-07-16 Univ Nat Central Method and apparatus of communication through eye movements and analysis method of communication through eye movements

Also Published As

Publication number Publication date
TW200911195A (en) 2009-03-16
