TW201124917A - Alignment method and alignment apparatus of pupil or facial characteristics - Google Patents

Alignment method and alignment apparatus of pupil or facial characteristics

Info

Publication number
TW201124917A
TW201124917A TW99101126A
Authority
TW
Taiwan
Prior art keywords
image
module
pupil
instant
processing
Prior art date
Application number
TW99101126A
Other languages
Chinese (zh)
Other versions
TWI447659B (en)
Inventor
Tsang-Chi Li
Yao-Tsung Hung
Original Assignee
Utechzone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Utechzone Co Ltd filed Critical Utechzone Co Ltd
Priority to TW099101126A priority Critical patent/TWI447659B/en
Publication of TW201124917A publication Critical patent/TW201124917A/en
Application granted granted Critical
Publication of TWI447659B publication Critical patent/TWI447659B/en

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An alignment method of pupil or facial characteristics is disclosed. The alignment method includes the steps of capturing a real-time image; transforming the real-time image into a non-photorealistic real-time image; displaying the non-photorealistic real-time image; analyzing the real-time image to acquire pupil data or facial characteristic data; and determining whether the user is in the operation zone in accordance with the pupil data or the facial characteristic data. With the alignment method of pupil or facial characteristics of the present invention, users can be spared uncomfortable feelings during alignment, thereby increasing users' willingness to use eye tracking systems. An alignment apparatus of pupil or facial characteristics is also disclosed.
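For illustration only, the five steps of the abstract can be sketched as a small pipeline. Every function below is a stub with assumed names and behavior (a synthetic frame, a mosaic transform, a dummy analysis), not the patented implementation; note that the analysis runs on the raw frame, while only the transformed frame would be shown to the user.

```python
import numpy as np

def capture_realtime_image(h=240, w=320):
    """Stub for the image capture module: returns a synthetic grayscale frame."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 256, size=(h, w), dtype=np.uint8)

def to_non_photorealistic(img, block=8):
    """Stub transform: coarse mosaic, so the user never sees a lifelike image."""
    h, w = img.shape
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y+block, x:x+block] = img[y:y+block, x:x+block].mean()
    return out

def analyze_pupil(img):
    """Stub analysis: pretend one pupil center and one glint were found."""
    return {"center": (img.shape[0] // 2, img.shape[1] // 2), "glints": [(118, 162)]}

def in_operation_zone(pupil_data, h=240, w=320, margin=0.25):
    """Accept the position when the pupil center lies in the central region."""
    cy, cx = pupil_data["center"]
    return (margin * h <= cy <= (1 - margin) * h) and (margin * w <= cx <= (1 - margin) * w)

frame = capture_realtime_image()
shown = to_non_photorealistic(frame)   # this, not `frame`, would be displayed
pupil = analyze_pupil(frame)           # analysis still runs on the raw frame
ok = in_operation_zone(pupil)
```

The separation between the displayed (transformed) image and the analyzed (raw) image is the point of the method: the analysis needs the photorealistic data, but the user need never see it.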

Description

201124917

VI. Description of the Invention:

[Technical Field]

The present invention relates to a pupil or facial feature alignment method and apparatus, and in particular to a pupil or facial feature alignment method and apparatus applied to eye tracking.

[Prior Art]

Communication is an extremely important part of human life. As members of social groups, no one can be entirely exempt from interacting with others, and keeping those interactions positive and constructive depends on the complete expression of meaning between people.

To maintain relationships, conduct affairs, and even assert their rights, humans have developed a great variety of means of communication capable of expressing their intentions completely: expressions and gestures, as well as the spoken languages and written symbols that most directly represent a culture. These many forms of communication have made human life richer and group life more colorful, stimulated the sparks of science, and nurtured the humanities, truly forming the foundation of human civilization.

However, a minority of patients, and victims of major accidents, cannot control their limbs or speak as ordinary people do owing to damage to the central nervous system or other factors, even though their consciousness and reasoning remain intact, and so cannot express themselves completely. Because communication between such patients and their caregivers cannot be established, the patients or the injured suffer considerable inconvenience and cannot convey their own physical and psychological conditions to the medical staff, prolonging the course of rehabilitation.

At present, the prior art already uses eye tracking, through a suitable display medium, as a means of expression for patients and the injured: the so-called eye control system.
However, because each user differs in facial contour, eye features, usage habits, and so on, such an eye control system generally guides the user through facial alignment, especially of the pupils, before operation by capturing and displaying the user's image. Most users of such systems have been bedridden for a long time, and their complexion or operating environment may be poor; patients with facial injuries or terminal cancer, in particular, are unwilling to see their own sick appearance during the alignment process, which affects their willingness to use the system. In addition, because an eye control system is usually equipped with auxiliary optical instruments whose emitted light forms reflective points on the user's eyeballs, even a user in normal physiological condition will inevitably feel uncomfortable on seeing an image with foreign objects in the eyes.

Therefore, providing a pupil or facial feature alignment method and apparatus that assists the user in aligning pupil or facial features within a non-photorealistic image, avoiding uncomfortable feelings during the alignment process and thereby increasing the user's willingness to use the system, has become an important objective.

[Summary of the Invention]

In view of the above, an object of the present invention is to provide a pupil or facial feature alignment method and apparatus that assists the user in aligning pupil or facial features within a non-photorealistic image, avoiding uncomfortable feelings and increasing the willingness to use.
To achieve the above object, a pupil alignment method according to the present invention comprises the following steps: capturing a real-time image; transforming the real-time image into a non-photorealistic real-time image; displaying the non-photorealistic real-time image; analyzing the real-time image to acquire pupil data; and determining, in accordance with the acquired pupil data, whether the user is within an operation zone.

According to preferred embodiments of the invention, the non-photorealistic real-time image may be a real-time image processed by watermarking, softening, embossing, mosaic, stroke, edge-detection, texturing, oil-painting, black-and-white, or grayscale processing.

According to preferred embodiments, the user may adjust position in one, two, or three dimensions.

According to preferred embodiments, the pupil alignment method may further comprise the step of providing an indication signal to guide the user in adjusting position.

According to preferred embodiments, the pupil data may comprise data of a pupil center and of at least one reflective point.

To achieve the above object, a facial feature alignment method according to the present invention comprises the following steps: capturing a real-time image; transforming the real-time image into a non-photorealistic real-time image; displaying the non-photorealistic real-time image; analyzing the real-time image to acquire facial feature data; and determining, in accordance with the acquired facial feature data, whether the user is within an operation zone.

According to preferred embodiments, the facial feature data may comprise data of the face contour, the relative positions of the facial features, raised portions of the face, at least one eyeball, or at least one pupil.
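For illustration only, two of the listed processings, black-and-white (grayscale) conversion and edge detection, can be sketched with plain NumPy. The luminance weights and the first-difference gradient are generic textbook choices, not the patent's specified implementation.

```python
import numpy as np

def to_grayscale(rgb):
    """Black-and-white processing: luminance-weighted average of RGB channels."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def edge_map(gray, thresh=32):
    """Edge-detection processing: first-difference gradient magnitude, thresholded."""
    gy = np.abs(np.diff(gray.astype(np.int16), axis=0))[:, :-1]
    gx = np.abs(np.diff(gray.astype(np.int16), axis=1))[:-1, :]
    return ((gx + gy) > thresh).astype(np.uint8) * 255

# A synthetic frame with one hard vertical boundary to exercise both filters.
rgb = np.zeros((64, 64, 3), dtype=np.uint8)
rgb[:, 32:] = 255
gray = to_grayscale(rgb)
edges = edge_map(gray)
```

An edge map shows only outlines, so the user sees where the face sits in the frame without seeing the face itself, which is the stated purpose of the non-photorealistic transform.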
To achieve the above object, a pupil alignment apparatus according to the present invention comprises an image capture module, an image transformation module, a display module, an image analysis module, and a control module. The image capture module captures a real-time image. The image transformation module is connected to the image capture module and transforms the real-time image into a non-photorealistic real-time image. The display module is connected to the image transformation module and displays the non-photorealistic real-time image. The image analysis module is connected to the image capture module and analyzes the real-time image to acquire pupil data. The control module is connected to the image capture module, the image transformation module, the image analysis module, and the display module, and determines, in accordance with the acquired pupil data, whether the user is within an operation zone.

According to preferred embodiments, the image capture module may be a charge-coupled device (CCD) camera or a complementary metal-oxide-semiconductor (CMOS) camera.
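For illustration only, the five-module topology just described can be sketched as callables wired together, with the control module's decision rule driving one capture-display-analyze cycle. The class and field names are assumptions made for this sketch, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PupilAlignmentApparatus:
    capture: Callable[[], Any]        # image capture module
    transform: Callable[[Any], Any]   # image transformation module
    display: Callable[[Any], None]    # display module
    analyze: Callable[[Any], dict]    # image analysis module
    in_zone: Callable[[dict], bool]   # control module's decision rule

    def step(self) -> bool:
        frame = self.capture()
        self.display(self.transform(frame))   # user only sees the transformed image
        return self.in_zone(self.analyze(frame))

# Toy wiring to show the data flow; real modules would be a camera, a filter,
# a screen, and a pupil detector.
shown = []
app = PupilAlignmentApparatus(
    capture=lambda: "frame",
    transform=lambda f: f.upper(),
    display=shown.append,
    analyze=lambda f: {"center": (10, 10)},
    in_zone=lambda d: d["center"] is not None,
)
```

Because each module is an independent callable, the same control loop serves both the pupil and the facial-feature variants of the apparatus: only `analyze` and `in_zone` change.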

To achieve the above object, a facial feature alignment apparatus according to the present invention comprises an image capture module, an image transformation module, a display module, an image analysis module, and a control module. The image capture module captures a real-time image. The image transformation module is connected to the image capture module and transforms the real-time image into a non-photorealistic real-time image. The display module is connected to the image transformation module and displays the non-photorealistic real-time image. The image analysis module is connected to the image capture module and analyzes the real-time image to acquire facial feature data. The control module is connected to the image capture module, the image transformation module, the image analysis module, and the display module, and determines, in accordance with the acquired facial feature data, whether the user is within an operation zone.

As described above, the pupil or facial feature alignment method and apparatus of the present invention transform the real-time image into a non-photorealistic real-time image and use it as the basis for the user's position adjustment, so that the user is spared the uncomfortable feeling of looking directly at his or her own face and/or noticing abnormal reflective points in the eyes during alignment. Compared with the prior art, applying the invention to an eye control system not only simplifies the alignment procedure before operation; the transformed non-photorealistic real-time image also prevents the eye-tracking technique from being easily discerned by outsiders. Most importantly, it protects the user's feelings and thereby increases the willingness to use, making for a humanized design.

[Detailed Description of the Preferred Embodiments]

A pupil or facial feature alignment method and apparatus according to preferred embodiments of the present invention will now be described with reference to the related drawings, in which like elements are denoted by like reference numerals.

As shown in Fig. 1, a pupil alignment method according to a preferred embodiment of the invention comprises steps S100 to S140. In step S100, a real-time image is captured. In step S110, the real-time image is transformed into a non-photorealistic real-time image; that is, the captured real-time image undergoes special processing, for example watermarking, softening, embossing, mosaic, stroke, edge-detection, texturing, oil-painting, black-and-white, or grayscale processing, so that the user avoids the discomfort of looking directly at his or her own face.

In step S120, the non-photorealistic real-time image is displayed as the basis for the user's position adjustment; the user can adjust position until, for example, the whole face appears in the image. Referring to the displayed self-image, the user can adjust position in one, two, or three dimensions. In detail, one-dimensional adjustment means adjusting only in the left-right, up-down, or forward-backward direction; two-dimensional adjustment means adjusting simultaneously and/or separately in, for example, the up-down and left-right directions, the forward-backward and left-right directions, or the up-down and forward-backward directions; and three-dimensional adjustment means adjusting simultaneously and/or separately in the left-right, up-down, and forward-backward directions, reducing the time required for adjustment.

In step S130, the real-time image is analyzed to acquire pupil data. The pupil data may concern a single pupil, or both pupils at once to improve the accuracy of the analysis. The pupil data may comprise data of a pupil center and of at least one reflective point; the pupil center and the reflective points on the eyeball, obtained for example by analysis software/hardware, serve as the analysis data.

In step S140, whether the user is within an operation zone is determined in accordance with the acquired pupil data. The operation zone may be, for example, within 30 degrees above, below, left, and right of the display, at a front-to-back distance (the distance between the user and the display) of 60 to 80 cm. Note that this is merely an example for ease of understanding and is not intended to limit the invention.

In addition, the pupil alignment method may further comprise the step of providing an indication signal to guide the user in adjusting position. The indication signal may be, for example, a direction indication displayed elsewhere on the display or integrated into the non-photorealistic real-time image, prompting the user how to adjust position. The indication signal may also guide the user by voice, by sound (duration or number of tones, and so on), or by light (steady or flashing).

The above pupil alignment method can be implemented in a pupil alignment apparatus. Referring to Figs. 2 and 3, the pupil alignment apparatus 2 comprises an image capture module 21, an image transformation module 22, a display module 23, an image analysis module 24, and a control module 25.

The image capture module 21 captures the real-time image. In this embodiment, the image capture module 21 is, for example but not limited to, a charge-coupled device (CCD) camera or a complementary metal-oxide-semiconductor (CMOS) camera, and the number of image capture modules 21 is likewise not limited. In other aspects of the invention, the pupil alignment apparatus 2 may comprise two image capture modules 21, each corresponding to one of the user's left and right pupils.

The image transformation module 22 is connected to the image capture module 21 and transforms the real-time image into the non-photorealistic real-time image. The display module 23 is connected to the image transformation module 22 and displays the non-photorealistic real-time image as the basis for the user's position adjustment. The display module 23 is, for example but not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, or an electronic paper (e-paper) display, and the non-photorealistic real-time image may be shown full-screen or as a portion of the screen, sized on the principle of convenient reference for the user.

As shown in Fig. 4, without transformation the display module 23 would show the original real-time image P1, identical to an ordinary live picture: the same visual result observed by the human eye. In contrast, as shown in Fig. 5, in this embodiment the display module 23 shows the non-photorealistic real-time image P2 produced by the special processing, which differs from what the human eye normally observes.

The image analysis module 24 is connected to the image capture module 21 and analyzes the real-time image to acquire the pupil data. The analysis may use, for example, a preset database storing data on pupil centers, eyeball contours, eyeball patterns, black-white contrast variations, or reflective bright points; the image analysis module 24 compares and analyzes the real-time image against the database to obtain the required pupil data.

The control module 25 is connected to the image capture module 21, the image transformation module 22, the image analysis module 24, and the display module 23, and determines from the acquired pupil data whether the user is within the operation zone. The criteria may be, for example, whether the pupil data contain a pupil center and reflective bright points, whether the eyeball contour is complete, whether the eyeball pattern matches, or whether the black-white contrast variation is significant, compared and analyzed against the preset database. The operation zone has been exemplified in detail above and is not repeated here.

In this embodiment, after determining that the user is within the operation zone, the control module 25 can issue a control signal for the pupil alignment apparatus 2 to proceed to subsequent steps, for example terminating image capture by the image capture module 21 and/or displaying the operation interface on the display module 23. Conversely, if the control module 25 determines that the user is still outside the operation zone, the pupil alignment apparatus 2 continues to assist the user in adjusting position.

In this embodiment, the pupil alignment apparatus 2 may further comprise at least one light source emission module. Referring to Fig. 3, the pupil alignment apparatus 2 preferably comprises two light source emission modules 26, which may be, for example, infrared light sources producing easily identifiable reflective points.

Although the pupil alignment apparatus 2 of this embodiment is described with the image capture module 21, the image transformation module 22, the display module 23, the image analysis module 24, the control module 25, and the light source emission modules 26 integrated into a single electronic device, those of ordinary skill in the art will appreciate that the pupil alignment apparatus 2 may also be used in combination with an existing personal computer: for example, the image capture module 21 and the light source emission modules 26 may be combined with the personal computer's display (as the display module 23), while the image transformation module 22, the image analysis module 24, and the control module 25 are implemented in the personal computer's host.

Accordingly, referring to Figs. 6A and 6B, the pupil alignment method and apparatus transform the real-time image into a non-photorealistic real-time image, sparing the user the discomfort of looking directly at his or her own face; moreover, the abnormal reflective points originally appearing in the user's eyes (Fig. 6A) are eliminated after the transformation (Fig. 6B), protecting the user's feelings.

A non-limiting example is given below with reference to Figs. 7 to 9 to illustrate the relationship between the non-photorealistic real-time image displayed by the pupil alignment apparatus 2 and the user's position adjustment.
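For illustration only, the example operation zone given in the description (within about 30 degrees of the display axis horizontally and vertically, at a distance of 60 to 80 cm) can be sketched as a check. How the centimeter offsets would be estimated from the image is left to the analysis module; the angles here are a plausible geometric reading of the example, not the patent's actual criterion.

```python
import math

def in_operation_zone(offset_x_cm, offset_y_cm, distance_cm,
                      max_angle_deg=30.0, d_min=60.0, d_max=80.0):
    """Accept the position when the user sits 60-80 cm from the display and
    within ~30 degrees of its axis, both horizontally and vertically."""
    if not (d_min <= distance_cm <= d_max):
        return False
    ax = math.degrees(math.atan2(abs(offset_x_cm), distance_cm))
    ay = math.degrees(math.atan2(abs(offset_y_cm), distance_cm))
    return ax <= max_angle_deg and ay <= max_angle_deg

in_operation_zone(0, 0, 70)    # centered and in range
in_operation_zone(0, 0, 90)    # too far from the display
in_operation_zone(50, 0, 70)   # atan(50/70) exceeds the 30-degree cone
```

The distance gate runs first because an angular tolerance is meaningless if the user is outside the working depth of the camera and infrared sources.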

Fig. 7 is a schematic diagram of the relative positions of the user and the pupil alignment apparatus 2, and Fig. 8 is a schematic diagram of the real-time image displayed by the pupil alignment apparatus in the state shown in Fig. 7. Referring to Figs. 7 and 8, when the user is not yet within the operation zone of the pupil alignment apparatus 2 (as shown in Fig. 7), the image displayed by the display module 23 does not contain the whole face (as shown in Fig. 8), indicating that the user's current position is unsuitable for operation.

When the user is not within the operation zone, the user or an assistant can judge how to adjust position by referring to the image displayed by the display module 23, or be guided by voice and/or sound and light. The manner of position adjustment has been detailed above and is not repeated here. Once the user enters the operation zone (as shown in Fig. 9), the image displayed by the display module 23 should contain, for example, the whole face (as shown in Fig. 5).
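For illustration only, the guidance just described (on-screen hints, voice, or light) can be sketched as a function that turns a detected face bounding box into a textual direction hint. The box format `(x, y, w, h)`, the frame size, the tolerance, and the left/right convention (which depends on whether the displayed image is mirrored) are all assumptions of this sketch.

```python
def indication_signal(face_box, frame_w=320, frame_h=240, tol=20):
    """Map a face bounding box (x, y, w, h) to a direction hint; an actual
    system could also speak the hint or flash a light, as described above."""
    if face_box is None:
        return "move into view"
    x, y, w, h = face_box
    cx, cy = x + w / 2, y + h / 2
    hints = []
    if cx < frame_w / 2 - tol:
        hints.append("move right")
    elif cx > frame_w / 2 + tol:
        hints.append("move left")
    if cy < frame_h / 2 - tol:
        hints.append("move down")
    elif cy > frame_h / 2 + tol:
        hints.append("move up")
    return ", ".join(hints) if hints else "hold position"
```

A dead zone (`tol`) around the frame center keeps the hint stable once the user is close enough, matching the behavior of stopping guidance when the whole face appears.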

Although the above two examples illustrate how alignment is performed with the pupils, those of ordinary skill in the art will appreciate that alignment may also be performed with facial features. As shown in Fig. 10, a facial feature alignment method according to the present invention comprises the following steps: capturing a real-time image (S200); transforming the real-time image into a non-photorealistic real-time image (S210); displaying the non-photorealistic real-time image as the basis for the user's position adjustment (S220); analyzing the real-time image to acquire facial feature data (S230); and determining, in accordance with the acquired facial feature data, whether the user is within the operation zone (S240). The facial feature alignment method may likewise be implemented in a facial feature alignment apparatus.

The step flow, execution, and constituent modules of the facial feature alignment method and apparatus are the same as those of the pupil alignment method and apparatus shown in Figs. 1 to 9 and are not repeated here. It should be particularly noted, however, that the facial feature data acquired by the image analysis module may comprise, for example, data of the face contour, the relative positions of the facial features, raised portions of the face, at least one eyeball, or at least one pupil, and that the control module determines from the acquired facial feature data whether the user is within the operation zone.

In summary, the pupil or facial feature alignment method and apparatus of the present invention transform the real-time image into a non-photorealistic real-time image and use it as the basis for the user's position adjustment, so that during alignment the user avoids the uncomfortable feeling produced by looking directly at his or her own face and/or noticing abnormal reflective points in the eyes, increasing the willingness to use; moreover, the eye-tracking technique is prevented from being easily discerned by outsiders.

The foregoing is illustrative only and not restrictive. Any equivalent modification or alteration that does not depart from the spirit and scope of the present invention shall be included in the appended claims.

[Brief Description of the Drawings]

Fig. 1 is a flowchart of the steps of a pupil alignment method according to the present invention;
Fig. 2 is a system block diagram of a pupil alignment apparatus according to the present invention;
Fig. 3 is a schematic diagram of the pupil alignment apparatus shown in Fig. 2;
Fig. 4 is a schematic diagram of the pupil alignment apparatus displaying a real-time image;
Fig. 5 is a schematic diagram of the pupil alignment apparatus displaying a non-photorealistic real-time image;
Figs. 6A and 6B are enlarged views of the eyeball regions of the images shown in Fig. 4 and Fig. 5, respectively;
Fig. 7 is a schematic diagram of the relative positions of the user and the pupil alignment apparatus, in which the user is not within the operation zone;
Fig. 8 is a schematic diagram of the pupil alignment apparatus displaying the non-photorealistic real-time image in the state shown in Fig. 7;
Fig. 9 is a schematic diagram of the relative positions of the user and the pupil alignment apparatus, in which the user has completed position adjustment; and
Fig. 10 is a flowchart of the steps of a facial feature alignment method according to the present invention.

[Description of Main Reference Numerals]

2: pupil alignment apparatus
21: image capture module
22: image transformation module
23: display module
24: image analysis module
25: control module
26: light source emission module
P1: original real-time image
P2: non-photorealistic real-time image
S100-S140, S200-S240: steps

Claims (16)

Claims:

1. A pupil alignment method, comprising the steps of: capturing a real-time image; converting the real-time image into a non-photorealistic real-time image; displaying the non-photorealistic real-time image; analyzing the real-time image to obtain pupil data; and determining, according to the pupil data, whether a user is located within an operating position range.

2. The pupil alignment method of claim 1, wherein the non-photorealistic real-time image is a real-time image that has undergone watermark processing, softening processing, emboss processing, mosaic processing, stroke processing, edge-detection processing, texturing processing, oil-painting processing, black-and-white processing, or grayscale processing.

3. The pupil alignment method of claim 1, wherein the pupil data comprises data of a pupil center and at least one reflective point.

4. The pupil alignment method of claim 1, further comprising the step of: providing an indication signal to guide the user to perform a position adjustment.

5. A facial feature alignment method, comprising the steps of: capturing a real-time image; converting the real-time image into a non-photorealistic real-time image; displaying the non-photorealistic real-time image; analyzing the real-time image to obtain facial feature data; and determining, according to the facial feature data, whether a user is located within an operating position range.

6. The facial feature alignment method of claim 5, wherein the non-photorealistic real-time image is a real-time image that has undergone watermark processing, softening processing, emboss processing, mosaic processing, stroke processing, edge-detection processing, texturing processing, oil-painting processing, black-and-white processing, or grayscale processing.

7. The facial feature alignment method of claim 5, wherein the facial feature data comprises data of a face contour, relative positions of facial features, raised portions of the face, at least one eyeball, or at least one pupil.

8. The facial feature alignment method of claim 5, further comprising the step of: providing an indication signal to guide the user to perform a position adjustment.

9. A pupil alignment apparatus, comprising: an image capturing module that captures a real-time image; an image conversion module, connected to the image capturing module, that converts the real-time image into a non-photorealistic real-time image; a display module, connected to the image conversion module, that displays the non-photorealistic real-time image; an image analysis module, connected to the image capturing module, that analyzes the real-time image to obtain pupil data; and a control module, connected to the image capturing module, the image conversion module, the image analysis module, and the display module, that determines, according to the obtained pupil data, whether a user is located within an operating position range.

10. The pupil alignment apparatus of claim 9, wherein the image capturing module is a charge-coupled device (CCD) camera or a complementary metal-oxide-semiconductor (CMOS) camera.

11. The pupil alignment apparatus of claim 9, wherein the non-photorealistic real-time image is a real-time image that has undergone watermark processing, softening processing, emboss processing, mosaic processing, stroke processing, edge-detection processing, texturing processing, oil-painting processing, black-and-white processing, or grayscale processing.

12. The pupil alignment apparatus of claim 9, wherein the pupil data comprises data of a pupil center and at least one reflective point.

13. A facial feature alignment apparatus, comprising: an image capturing module that captures a real-time image; an image conversion module, connected to the image capturing module, that converts the real-time image into a non-photorealistic real-time image; a display module, connected to the image conversion module, that displays the non-photorealistic real-time image; an image analysis module, connected to the image capturing module, that analyzes the real-time image to obtain facial feature data; and a control module, connected to the image capturing module, the image conversion module, the image analysis module, and the display module, that determines, according to the obtained facial feature data, whether a user is located within an operating position range.

14. The facial feature alignment apparatus of claim 13, wherein the image capturing module is a charge-coupled device (CCD) camera or a complementary metal-oxide-semiconductor (CMOS) camera.

15. The facial feature alignment apparatus of claim 13, wherein the non-photorealistic real-time image is a real-time image that has undergone watermark processing, softening processing, emboss processing, mosaic processing, stroke processing, edge-detection processing, texturing processing, oil-painting processing, black-and-white processing, or grayscale processing.

16. The facial feature alignment apparatus of claim 13, wherein the facial feature data comprises data of a face contour, relative positions of facial features, raised portions of the face, at least one eyeball, or at least one pupil.
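The claimed pipeline (capture a frame, apply a non-photorealistic transform such as grayscale conversion, analyze it for pupil data, then check the operating position range) can be sketched as follows. This is a minimal illustration only: every function name, the darkest-pixel pupil heuristic, and the synthetic frame are assumptions for demonstration, not taken from the patent.

```python
# Illustrative sketch of the claimed alignment loop; all names and the
# darkest-pixel pupil heuristic are assumptions, not the patented method.

def to_grayscale(pixel):
    # Grayscale conversion is one of the non-photorealistic transforms
    # the claims enumerate (alongside embossing, mosaic, edge detection...).
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def find_pupil(gray):
    # Toy "image analysis module": treat the darkest pixel as the pupil.
    # A real tracker would threshold and also locate the glint (the
    # "at least one reflective point" of claims 3 and 12).
    h, w = len(gray), len(gray[0])
    _, xy = min((gray[y][x], (x, y)) for y in range(h) for x in range(w))
    return xy

def in_operating_zone(pupil_xy, zone):
    # Toy "control module": is the pupil inside the operating position range?
    x, y = pupil_xy
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

# A 4x4 RGB "frame" with one dark spot standing in for the pupil.
frame = [[(200, 200, 200)] * 4 for _ in range(4)]
frame[1][2] = (10, 10, 10)

gray = [[to_grayscale(p) for p in row] for row in frame]
pupil = find_pupil(gray)
print(pupil, in_operating_zone(pupil, (1, 0, 3, 2)))
```

In the claimed apparatus the non-photorealistic frame (here, `gray`) is what the display module shows back to the user, while the analysis and zone check drive the indication signal of claims 4 and 8.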
TW099101126A 2010-01-15 2010-01-15 Alignment method and alignment apparatus of pupil or facial characteristics TWI447659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW099101126A TWI447659B (en) 2010-01-15 2010-01-15 Alignment method and alignment apparatus of pupil or facial characteristics


Publications (2)

Publication Number Publication Date
TW201124917A true TW201124917A (en) 2011-07-16
TWI447659B TWI447659B (en) 2014-08-01

Family

ID=45047273

Family Applications (1)

Application Number Title Priority Date Filing Date
TW099101126A TWI447659B (en) 2010-01-15 2010-01-15 Alignment method and alignment apparatus of pupil or facial characteristics

Country Status (1)

Country Link
TW (1) TWI447659B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI274277B (en) * 2005-10-24 2007-02-21 Inventec Appliances Corp Speech prompt system and method thereof
JP2007206833A (en) * 2006-01-31 2007-08-16 Toshiba Corp Biological collation method and device
DE602007010523D1 (en) * 2006-02-15 2010-12-30 Toshiba Kk Apparatus and method for personal identification
TW200928892A (en) * 2007-12-28 2009-07-01 Wistron Corp Electronic apparatus and operation method thereof

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2811369A1 (en) 2013-06-03 2014-12-10 Utechnzone Co., Ltd. Method of moving a cursor on a screen to a clickable object and a computer system and a computer program thereof
US10049271B2 (en) 2013-12-27 2018-08-14 Utechzone Co., Ltd. Authentication system controlled by eye open and eye closed state, handheld control apparatus thereof and computer readable recording media
TWI507911B (en) * 2014-02-25 2015-11-11 Utechzone Co Ltd Authentication system controlled by eye open and eye closed state and handheld control apparatus thereof
CN111488775A (en) * 2019-01-29 2020-08-04 财团法人资讯工业策进会 Device and method for judging degree of fixation
CN111488775B (en) * 2019-01-29 2023-04-28 财团法人资讯工业策进会 Device and method for judging degree of visibility

Also Published As

Publication number Publication date
TWI447659B (en) 2014-08-01
