TW201207702A - Method for determining positions of touch points on an optical touch panel - Google Patents

Method for determining positions of touch points on an optical touch panel

Info

Publication number
TW201207702A
TW201207702A TW99126732A
Authority
TW
Taiwan
Prior art keywords
image
point
area
camera unit
real
Prior art date
Application number
TW99126732A
Other languages
Chinese (zh)
Other versions
TWI423099B (en)
Inventor
Chun-Jen Lee
Lung-Kai Cheng
Te-Yuan Li
Original Assignee
Qisda Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qisda Corp filed Critical Qisda Corp
Priority to TW99126732A priority Critical patent/TWI423099B/en
Publication of TW201207702A publication Critical patent/TW201207702A/en
Application granted granted Critical
Publication of TWI423099B publication Critical patent/TWI423099B/en

Landscapes

  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A first image capturing device and a second image capturing device capture first and second real point images each including an image of a plurality of touch points on an indication region of an optical touch panel. The first and the second real point images outline a plurality of candidate regions on the indication region; one of the plurality of candidate regions is selected to be the verifying region. A captured image characteristic is obtained by capturing a touch point image from the verifying region through a mirror of the optical touch panel, and whether the verifying region corresponds to one of the plurality of touch points is determined according to the captured image characteristic.

Description

VI. Description of the Invention:

[Technical Field of the Invention]

The present invention relates to an optical touch screen, and more particularly to a method of determining the positions at which an optical touch screen is actually touched.

[Prior Art]

As touch technology matures, touch devices combining large display panels with multi-touch have become the mainstream of the market and are widely used in electronic products such as automatic teller machines, handheld electronic devices, and displays. Touch technologies generally fall into three types: resistive, capacitive, and optical. Resistive and capacitive touch screens locate an object by the change in resistance or capacitance produced where the object presses the screen. Optical touch screens locate an object by the blocking of light paths, or the change in light and shadow, caused as the object touches or moves on the screen. Producing large resistive or capacitive panels requires dedicated equipment and components and is therefore costly, whereas optical touch technology is comparatively simple to implement and lower in cost, giving it an advantage at large screen sizes. Under multi-touch, however, an optical screen easily misjudges touch points because several light paths are blocked at once.

Please refer to Fig. 1, which illustrates how an optical touch screen forms ghost touch points under multi-touch when several light paths are blocked. As shown in Fig. 1, the light paths emitted by the light sources LG1 and LG2 are blocked by the real touch points OP1 and OP2. Because a prior-art optical touch screen judges only whether a light path is blocked, and not the object itself, it determines not only the real touch points OP1 and OP2 but also the ghost touch points GP1 and GP2 formed at the shared intersections of the blocked light paths.

Because ghost touch points exist, a prior-art optical touch screen may misjudge the touch positions whenever two or more real touch points are present at the same time, causing trouble for the user.

[Summary of the Invention]

One embodiment of the present invention provides a method of determining the positions at which a plurality of touch input points actually touch the indication region of an optical touch screen. The optical touch screen includes a first camera unit, a second camera unit, a light-emitting module, and a mirror. The first and second camera units capture images of the indication region obliquely. The light-emitting module guides light into the indication region to be sensed by the first and second camera units. The mirror is disposed opposite the first and second camera units. The method includes: capturing an image of the indication region with the first camera unit to generate a first real-point image; capturing an image of the indication region with the second camera unit to generate a second real-point image, the first and second real-point images producing a plurality of candidate regions on the indication region; selecting one of the candidate regions as a region to be detected; capturing a touch-input-point image of the region to be detected through the mirror to generate a captured image characteristic; and determining, according to the captured image characteristic, whether the region to be detected actually corresponds to one of the touch input points.

Another embodiment of the present invention provides a method of determining the positions at which a plurality of touch input points actually touch the indication region of an optical touch screen. The optical touch screen includes a first camera unit, a second camera unit, a light-emitting module, and a mirror, arranged as above. The method includes: (a) capturing an image of the indication region with the first camera unit to generate a first real-point image; (b) capturing an image of the indication region with the first camera unit through the mirror to generate a first virtual-point image; (c) capturing an image of the indication region with the second camera unit to generate a second real-point image, the first and second real-point images producing a plurality of candidate regions on the indication region; (d) generating a possible real-point distribution region from at least one of the candidate regions; (e) generating a first reconstructed image corresponding to the mirror from the possible real-point distribution region, the first real-point image and the first reconstructed image producing a first virtual-point reconstructed image; and (f) determining, from the first virtual-point image and the first virtual-point reconstructed image, whether the possible real-point distribution region corresponds to at least one of the touch input points.

The method provided by the present invention therefore eliminates the ghost touch points produced under multi-touch and correctly determines the positions at which the optical touch screen is actually touched.

[Embodiments]

Please refer to Fig. 2, which shows an embodiment of the optical touch screen 2 of the present invention. The optical touch screen 2 includes an indication region 20, a first camera unit 22, a second camera unit 24, a mirror 26, a light-emitting module 27, and a processing unit 28. The indication region 20 receives contact from a plurality of objects to be detected; the positions where the objects contact the indication region 20 are the touch input points. In this embodiment the indication region 20 is defined by a left edge 202, a lower edge 204, a right edge 206, and an upper edge 208. The left edge 202 and the upper edge 208 form an upper-left corner C1, the right edge 206 and the upper edge 208 form an upper-right corner C2, and the upper edge 208 faces the lower edge 204. The first camera unit 22 is disposed at the upper-left corner C1 and the second camera unit 24 at the upper-right corner C2, each capturing images of the indication region 20 obliquely: the first camera unit 22 captures the range corresponding to the lower edge 204 and the right edge 206, while the second camera unit 24 captures the range corresponding to the left edge 202 and the lower edge 204. When an object contacts the indication region 20, part of the light emitted by the light-emitting module 27 along the lower edge 204 and the right edge 206 is blocked by the object, so the first camera unit 22 produces a first real-point image I1; likewise, part of the light along the left edge 202 and the lower edge 204 is blocked, so the second camera unit 24 produces a second real-point image I2. The light-emitting module 27 guides light into the indication region 20 so that the first and second camera units 22, 24 can capture the first real-point image I1 and the second real-point image I2. The mirror 26 is disposed opposite the first and second camera units 22, 24, for example along the lower edge 204, so that the mirror-symmetric positions of the indication region 20, the first camera unit 22, and the second camera unit 24 form a mirrored indication region 20', a first mirrored camera unit 22', and a second mirrored camera unit 24'. The processing unit 28 is coupled to the first camera unit 22 and the second camera unit 24 and processes the first real-point image I1 and the second real-point image I2.

When a plurality of objects contact the indication region 20, the processing unit 28 controls the light-emitting module 27 so that the first and second camera units 22, 24 capture the first and second real-point images I1, I2. Suppose two objects contact the indication region 20 at touch input points O1 and O2. In the images I1 and I2, the touch input points O1 and O2 correspond to touch-input-point images Pa and Pb on the lens of the first camera unit 22 and to touch-input-point images Pc and Pd on the lens of the second camera unit 24. The processing unit 28 converts the touch-input-point images Pa, Pb, Pc, Pd into angles θa, θb, θc, θd according to a preset angle table. From the angles θa, θb the processing unit 28 generates a plurality of first image intervals A, B on the first real-point image I1, and from the angles θc, θd it generates a plurality of second image intervals C, D on the second real-point image I2. The candidate regions AC, BD, BC, AD in which the touch input points O1, O2 may lie are produced by combining the first image intervals A, B with the second image intervals C, D. Using triangulation, the processing unit 28 computes the positions of the candidate regions AC, BD, BC, AD from the positions of the first and second camera units 22, 24 and the angles θa, θb, θc, θd.

The first and second camera units 22, 24 may be linear sensors or area sensors, but are not limited thereto. The light-emitting module 27 of the optical touch screen 2 may be implemented with a retroreflector or with light guide plates, but is not limited thereto. For example, if the light-emitting module 27 is implemented with light guide plates, it includes an upper light guide plate disposed along the upper edge 208 of the indication region 20, a first side light guide plate and a second side light guide plate disposed along the left edge 202 and the right edge 206, and a lower light guide plate disposed along the lower edge 204. When the processing unit 28 enables the first camera unit 22 to capture the first real-point image I1, the optical touch screen 2 turns on the lower light guide plate at the lower edge 204 and the second side light guide plate at the right edge 206, the two plates being turned on in different periods. When the processing unit 28 enables the first camera unit 22 to capture through its mirror-symmetric position (equivalent to capturing with the first mirrored camera unit 22'), the optical touch screen 2 turns on the upper light guide plate at the upper edge 208 and the second side light guide plate at the right edge 206, the two plates being turned on in different periods. When the processing unit 28 enables the second camera unit 24 to capture the second real-point image I2, the optical touch screen 2 turns on the lower light guide plate at the lower edge 204 and the first side light guide plate at the left edge 202, the two plates being turned on in different periods. When the processing unit 28 enables the second camera unit 24 to capture through its mirror-symmetric position (equivalent to capturing with the second mirrored camera unit 24'), the optical touch screen 2 turns on the upper light guide plate at the upper edge 208 and the second side light guide plate at the right edge 206, the two plates being turned on in different periods.

Please refer to Fig. 3, a schematic diagram of an embodiment in which the light-emitting module 27 of the optical touch screen 2 of Fig. 2 is a retroreflector. The optical touch screen 2 of Fig. 3 is similar to that of Fig. 2, except that the first and second camera units 22, 24 are area sensors, the light-emitting module is a retroreflector RR, and a light source is provided on each of the first and second camera units 22, 24. As shown in the bird's-eye view (a) of the optical touch screen 2, the upper edge 208, left edge 202, and right edge 206 of the indication region 20 each carry two layers, upper and lower, of retroreflector RR, while the lower edge 204 carries an overlapping mirror 26 and retroreflector RR, the mirror 26 lying over the retroreflector RR of the lower edge 204. As shown in the cross-section of the optical touch screen 2, the lower half of the camera unit 24 faces the mirror 26 and receives the light retroreflected by the lower-layer retroreflectors at the left edge 202 and the right edge 206, as well as the light reflected by the mirror 26 at the lower edge 204.

Please refer to Fig. 4, which illustrates how the processing unit 28 of Fig. 2 converts the touch-input-point image Pa of the first camera unit 22 and the touch-input-point image Pc of the second camera unit 24 into the angles θa and θc according to the preset angle table. As shown in Fig. 4, the touch-input-point image Pa occupies the pixel start and end positions Pa_s and Pa_e on the first camera unit 22, and the touch-input-point image Pc occupies the pixel start and end positions Pc_s and Pc_e on the second camera unit 24. Using the preset angle table, the processing unit 28 obtains from the position Pa_s an angle θa_s, the angle measured from the upper edge 208 to the pixel position Pa_s, and from the position Pa_e an angle θa_e, the angle measured from the upper edge 208 to the pixel position Pa_e. The difference between θa_s and θa_e is the angle θa corresponding to the touch-input-point image Pa on the first camera unit 22. In the same way, the processing unit 28 uses the preset angle table to obtain from the pixel start and end positions Pc_s and Pc_e of the touch-input-point image Pc the angles θc_s and θc_e, whose difference is the angle θc corresponding to the touch-input-point image Pc on the second camera unit 24. The processing unit 28 can then use triangulation to compute the intersection of the bisector of the angle θa and the bisector of the angle θc, which is the center point (xc, yc) of the candidate region AC. The remaining angles, and the center points of the remaining candidate regions, are computed in the same manner.
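As an illustration of the angle-table conversion and bisector triangulation just described, the following Python sketch computes a candidate-region center point. It is a minimal model under assumed conventions, not the patent's implementation: the angle table is taken to be an array indexed by sensor pixel, the first camera unit 22 is placed at the origin of the upper edge with the second camera unit 24 at a known distance width along it, and angles are measured downward from the upper edge 208.

import math

def pixel_to_angle(angle_table, pixel_index):
    # The preset angle table maps a sensor pixel index to the angle,
    # measured from the upper edge 208, of the ray hitting that pixel.
    return angle_table[pixel_index]

def image_bisector(angle_table, start_px, end_px):
    # A touch-input-point image spans pixels start_px..end_px (Pa_s, Pa_e);
    # its bounding angles give the image angle, whose bisector is their mean.
    a_s = pixel_to_angle(angle_table, start_px)
    a_e = pixel_to_angle(angle_table, end_px)
    return 0.5 * (a_s + a_e)

def triangulate_center(width, theta_a, theta_c):
    # Intersect the bisector ray from camera 22 at (0, 0) with the bisector
    # ray from camera 24 at (width, 0); y grows toward the lower edge 204.
    yc = width / (1.0 / math.tan(theta_a) + 1.0 / math.tan(theta_c))
    xc = yc / math.tan(theta_a)
    return xc, yc

Each candidate-region center, such as (xc, yc) of the region AC, would be obtained by pairing one image interval of each camera unit and intersecting the two bisectors in this way.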
Please refer to Fig. 5, a flowchart of a method 5 of the present invention for determining the positions at which the indication region 20 of the optical touch screen 2 is actually touched. The method of Fig. 5 is explained with the optical touch screen 2 of Fig. 2, and its steps are as follows:

Step 500: capture an image of the indication region 20 with the first camera unit 22 to produce the first real-point image I1;
Step 502: capture an image of the indication region 20 with the second camera unit 24 to produce the second real-point image I2;
Step 504: from the first real-point image I1 and the second real-point image I2, produce a plurality of candidate regions, for example AC, BD, BC, AD in the embodiment of Fig. 2;
Step 506: select one of the candidate regions AC, BD, BC, AD as the region to be detected;
Step 508: capture a touch-input-point image of the region to be detected through the mirror 26 to produce a captured image characteristic;
Step 510: according to the captured image characteristic, determine whether the region to be detected actually corresponds to one of the touch input points O1, O2 of the objects.

In a preferred embodiment, the method produces a plurality of candidate regions and then computes, for each candidate region, the proportion of the region covered by the image captured by the first mirrored camera unit 22' and/or the second mirrored camera unit 24', so as to decide the probability that a touch input point O1, O2 lies in that candidate region. This coverage proportion corresponds to the captured image characteristic of step 508. For example, please refer to Fig. 6, which illustrates computing the proportion of the candidate region AC of Fig. 2 covered by the image captured by the first mirrored camera unit 22'. As shown in Fig. 6, the candidate region AC is taken as the region to be detected; under plural touch inputs there are a plurality of candidate regions, and the candidate region AC has four endpoints (x1, y1), (x2, y2), (x3, y3), (x4, y4). From the position of the first mirrored camera unit 22' and two endpoints of the candidate region AC, a trigonometric function such as the arctangent produces a first expected angle θ1_Expected_AC, and the angle table then gives a first expected pixel length P1_Expected_AC, which occupies the pixel start and end positions P1_Expected_AC_S and P1_Expected_AC_E on the first mirrored camera unit 22'. In a preferred embodiment, the two endpoints used are substantially the left and right endpoints (x2, y2) and (x4, y4) of the candidate region AC. The first mirrored camera unit 22' then observes whether a touch-input-point image appears within the first expected pixel length P1_Expected_AC; in this embodiment the observed image is the first observed pixel length P1_Observed_AC, which the preset angle table converts into a first observed angle θ1_Observed_AC. For example, the pixel start and end positions P1_Observed_AC_S and P1_Observed_AC_E of the first observed pixel length P1_Observed_AC on the first mirrored camera unit 22' are converted by the angle table into the angles θ1_Observed_AC_S and θ1_Observed_AC_E, whose difference is the first observed angle θ1_Observed_AC. From the degree to which the first observed angle θ1_Observed_AC covers the first expected angle θ1_Expected_AC, a first coverage ratio P1_AC is produced, giving the probability that a touch input point O1, O2 lies in the candidate region AC. The first coverage ratio P1_AC can be computed as:

P1_AC = θ1_Observed_AC / θ1_Expected_AC

The first expected angles and first observed angles of the remaining candidate regions are then computed in the same way, such as the first expected angles θ1_Expected_BD, θ1_Expected_BC, θ1_Expected_AD and the first observed angles θ1_Observed_BD, θ1_Observed_BC, θ1_Observed_AD of the candidate regions BD, BC, AD of Fig. 2, yielding the coverage ratios P1_BD, P1_BC, P1_AD of those regions; for example, the first coverage ratio of the candidate region BD is P1_BD = θ1_Observed_BD / θ1_Expected_BD. In short, the larger P1_AC, P1_BD, P1_BC, or P1_AD is, the more likely the corresponding candidate region is the position of a touch input point. Note that in this embodiment the coverage ratio is the ratio of the first observed angle to the first expected angle; other measures, such as the ratio of the first observed pixel length to the first expected pixel length, or the geometric relation between the region to be detected and the image captured within it, can serve the same purpose. Also, the present invention is explained with two touch input points O1, O2, but the number of touch input points is not limited to two, and the second mirrored camera unit 24' may be used as well.

In another embodiment of the present invention, step 508 uses both the first mirrored camera unit 22' and the second mirrored camera unit 24' to compute the coverage ratio of a candidate region. Please refer to Fig. 7, which illustrates computing the proportion of a candidate region AC of Fig. 2 covered by the images captured by the first and second mirrored camera units. The way the first mirrored camera unit 22' produces the first coverage ratio P1_AC of the candidate region AC is the same as described above and is not repeated here. The principle by which the second mirrored camera unit 24' produces a second coverage ratio P2_AC of the candidate region AC is similar. From the position of the second mirrored camera unit 24' and two endpoints of the candidate region AC, a trigonometric function such as the arctangent produces a second expected angle θ2_Expected_AC and its starting angular position, and the angle table then gives a second expected pixel length P2_Expected_AC and its starting pixel position; in a preferred embodiment, the two endpoints used are again substantially the left and right endpoints (x2, y2) and (x4, y4) of the candidate region AC. The second mirrored camera unit 24' then observes whether a touch-input-point image appears within the second expected pixel length P2_Expected_AC; in this embodiment the observed image is the second observed pixel length P2_Observed_AC, which the angle table converts into a second observed angle θ2_Observed_AC. From the degree to which the second observed angle θ2_Observed_AC covers the second expected angle θ2_Expected_AC, the second coverage ratio P2_AC is produced:

P2_AC = θ2_Observed_AC / θ2_Expected_AC

After the first and second coverage ratios P1_AC, P2_AC are obtained, the weight W1 of the first coverage ratio P1_AC and the weight W2 of the second coverage ratio P2_AC can be computed from the distances D1, D2 between the candidate region AC and the first and second mirrored camera units 22', 24', respectively:

W1 = D1 / (D1 + D2)
W2 = D2 / (D1 + D2)

From the first and second coverage ratios P1_AC, P2_AC and the corresponding weights W1, W2, the overall coverage ratio P of the candidate region AC is produced as:

P = W1 * P1_AC + W2 * P2_AC

The overall coverage ratios of the remaining candidate regions are then computed in the same way.
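The coverage computation above can be sketched as follows. The sketch assumes the angles are plain numbers and that "the degree to which the observed angle covers the expected angle" is the length of their interval intersection divided by the expected span; the description only fixes the ratio itself, so the clipping is an added assumption.

def coverage_ratio(obs_start, obs_end, exp_start, exp_end):
    # P = theta_Observed / theta_Expected, with the observed span clipped
    # to the expected span of the candidate region.
    expected = exp_end - exp_start
    if expected <= 0.0:
        return 0.0
    overlap = max(0.0, min(obs_end, exp_end) - max(obs_start, exp_start))
    return overlap / expected

def overall_coverage(p1, p2, d1, d2):
    # Weights from the distances D1, D2 between the candidate region and
    # the mirrored camera units 22', 24': W1 = D1/(D1+D2), W2 = D2/(D1+D2),
    # then P = W1*P1 + W2*P2 as in the formulas above.
    w1 = d1 / (d1 + d2)
    w2 = d2 / (d1 + d2)
    return w1 * p1 + w2 * p2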
Once every candidate region has a coverage ratio (the first coverage ratio or the overall coverage ratio), the candidate regions can be screened by their coverage ratios to determine which candidate regions actually correspond to the touch input points O1, O2. For example, in step 510 the candidate region with the smallest coverage ratio can simply be deleted and the candidate regions with the larger remaining coverage ratios taken as the positions of the touch input points. However, a candidate region with a low coverage ratio may still be misjudged, so step 510 can add a consistency verification procedure to raise the accuracy of the determined touch positions.

In another embodiment of the present invention, step 510 includes a consistency verification when determining the candidate regions corresponding to the plurality of touch input points: before the candidate region with the lowest coverage ratio is deleted, it is checked whether the union factors of the remaining candidate regions still include all of the image intervals that constitute all of the candidate regions; only if they do is the lowest-coverage region judged not to be a touch position and deleted. Please refer to Fig. 8, which illustrates consistency verification using the image intervals of the first camera unit 22 and the second camera unit 24. As shown in Fig. 8, the actual touch points lie in AC and BD; by the formulas above, the candidate regions AC and BD have coverage ratios of 85% and 65%, and AD has the lowest coverage ratio at 40%. The union factors of the candidate regions AC, BD, BC, AD are drawn from the first image intervals A, B and the second image intervals C, D: the union factor of the candidate region AC comprises the first image interval A and the second image interval C, that of BD comprises B and D, that of BC comprises B and C, and that of AD comprises A and D. Because the candidate region AD has the lowest coverage ratio, it is the least likely to contain a touch input point and is deleted first. When AD is deleted, the union factors of the remaining candidate regions AC, BD, BC still include the first image intervals A, B and the second image intervals C, D, so AD is judged not to be a touch position and may be deleted. Among the remaining unverified candidate regions AC, BD, BC, the candidate region BC has the lowest coverage ratio. When BC is deleted, the union factors of the remaining candidate regions AC, BD still include the first image intervals A, B and the second image intervals C, D, so BC is judged not to be a touch position and may be deleted. Among the remaining unverified candidate regions AC, BD, the candidate region BD has the lowest coverage ratio; but if BD were deleted, the union factor of the remaining candidate region AC would include only the first image interval A and the second image interval C, rather than all of the image intervals that constitute the candidate regions, so BD is judged to be the position of a touch input point and may not be deleted. The remaining undeleted candidate regions AC and BD are thus determined to be the positions of the touch input points.

However, consistency verification using only the first camera unit 22 and the second camera unit 24 may still misjudge the touch positions. Please refer to Fig. 9, which illustrates such a misjudgment. Here the touch input points actually lie in the candidate regions AC, BD, and BC, while among the coverage ratios of the candidate regions AC, BD, BC, AD, the region AD is again the lowest at 40% and BC is 81%. The union factors are drawn from the first image intervals A, B and the second image intervals C, D as before. Because AD has the lowest coverage ratio, and deleting it leaves union factors that still include all of the intervals A, B, C, D, AD is judged not to be a touch position and is deleted. Among the remaining unverified candidate regions AC, BD, BC, the candidate region AC has the lowest coverage ratio; but deleting AC would leave the union factors of BD, BC with only the first image interval B and the second image intervals C, D, so AC is judged to be a touch position, is not deleted, and is retained. Among the remaining unverified candidate regions BD, BC, the candidate region BC has the lowest coverage ratio. When BC is deleted, the union factors of the remaining candidate regions AC, BD include the first image intervals A, B and the second image intervals C, D, so BC is judged not to be a touch position and is deleted. After BC is deleted, the remaining unverified candidate regions are AC and BD; if BD were deleted, the union factor of the remaining region AC would include only the first image interval A and the second image interval C, so BD may not be deleted. The undeleted candidate regions AC, BD are therefore determined to be the touch positions, which contradicts the premise that the touch input points lie in AC, BD, and BC. The reason is that the actual touch input point in BC shares its image intervals with those in AC and BD, so verification through the first camera unit 22 and the second camera unit 24 alone may still misjudge the positions of the touch input points.

Therefore, in another embodiment of the present invention, the consistency verification of step 510 uses the first camera unit 22 and the second camera unit 24 together with the first mirrored camera unit 22' and the second mirrored camera unit 24' disposed at their mirror-symmetric positions. Please refer to Fig. 10, which illustrates consistency verification of the candidate regions using the first camera unit 22, the second camera unit 24, the first mirrored camera unit 22', and the second mirrored camera unit 24'. As shown in Fig. 10, the first mirrored camera unit 22', that is, the first camera unit 22 capturing through the mirror 26, captures an image of the indication region 20 containing a plurality of third image intervals E, F, G; the second mirrored camera unit 24', that is, the second camera unit 24 capturing through the mirror 26, captures an image of the indication region 20 containing a plurality of fourth image intervals I, J, K. Each candidate region is produced by the union of one of the first image intervals A, B, one of the second image intervals C, D, one of the third image intervals E, F, G, and one of the fourth image intervals I, J, K. For example, the union factor of the candidate region ACEI comprises the first image interval A, the second image interval C, the third image interval E, and the fourth image interval I; the union factor of the candidate region BCFJ comprises the first image interval B, the second image interval C, the third image interval F, and the fourth image interval J; and so on for the remaining candidate regions. That is, the union factors of the candidate regions ACEI, BCFJ, BDGK, ADGI together comprise the first image intervals A, B, the second image intervals C, D, the third image intervals E, F, G, and the fourth image intervals I, J, K. Suppose the candidate regions ACEI, BCFJ, BDGK, ADGI have coverage ratios of 80%, 81%, 82%, 40%, respectively, while the touch input points actually lie in ACEI, BCFJ, BDGK.

Because the candidate region ADGI has the lowest coverage ratio, ADGI is deleted first. When ADGI is deleted, the union factors of the remaining candidate regions ACEI, BCFJ, BDGK include all of the first image intervals A, B, the second image intervals C, D, the third image intervals E, F, G, and the fourth image intervals I, J, K, so ADGI is judged not to be a touch position and may be deleted. Among the remaining unverified candidate regions, ACEI has the lowest coverage ratio; but deleting ACEI would leave the union factors of BCFJ, BDGK with only the first image interval B, the second image intervals C, D, the third image intervals F, G, and the fourth image intervals J, K, so ACEI is judged to be the position of a touch input point and may not be deleted. Among the remaining unverified candidate regions BCFJ, BDGK, the region BCFJ has the lowest coverage ratio; but deleting BCFJ would leave the union factors of ACEI, BDGK with only the first image intervals A, B, the second image intervals C, D, the third image intervals E, G, and the fourth image intervals I, K, so BCFJ is judged to be a touch position and may not be deleted. Finally, if the remaining unverified candidate region BDGK were deleted, the union factors of the undeleted regions ACEI, BCFJ would lack part of the image intervals, so BDGK may not be deleted. The undeleted candidate regions ACEI, BCFJ, BDGK are therefore determined to be the positions of the touch input points, matching the premise that the touch input points lie in ACEI, BCFJ, BDGK. Note that the verification of Fig. 10 may also use the first camera unit 22 and the second camera unit 24 of Fig. 2 together with only one of the first mirrored camera unit 22' and the second mirrored camera unit 24' at its mirror-symmetric position; the verification proceeds as above and is not repeated.
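The consistency verification of Figs. 8 to 10 amounts to a simple loop: visit the unverified candidate regions in order of increasing coverage ratio, tentatively delete each one, and make the deletion permanent only if the union factors of the regions still kept cover every image interval. A Python sketch under assumed data structures (regions as sets of interval labels) follows.

def verify_candidates(regions, coverage):
    # regions: dict mapping a candidate region name, e.g. "ACEI", to its
    # union factor as a set of image intervals, e.g. {"A", "C", "E", "I"}.
    # coverage: dict mapping the same names to their coverage ratios.
    all_intervals = set().union(*regions.values())
    kept = dict(regions)
    for name in sorted(regions, key=lambda r: coverage[r]):
        if name not in kept:
            continue
        others = [s for n, s in kept.items() if n != name]
        if others and set().union(*others) == all_intervals:
            del kept[name]  # deleting it still leaves every interval covered
        # otherwise the region must itself be a touch position; keep it
    return set(kept)

Applied to the Fig. 10 example (ACEI, BCFJ, BDGK, ADGI with coverage ratios 0.80, 0.81, 0.82, 0.40), the loop deletes only ADGI and returns ACEI, BCFJ, BDGK.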
Please refer to Fig. 11, a flowchart of a method 11 of the present invention that uses the first mirrored camera unit 22' or the second mirrored camera unit 24' to determine the positions at which the indication region 20 of the optical touch screen 2 is actually touched. The method of Fig. 11 is explained with the optical touch screen 2 of Fig. 2, and its steps are as follows:

Step 1100: capture an image of the indication region 20 with the first camera unit 22 to produce the first real-point image I1;
Step 1102: capture an image of the indication region 20 with the second camera unit 24 to produce the second real-point image I2;
Step 1104: from the first real-point image I1 and the second real-point image I2, produce a plurality of candidate regions;
Step 1106: capture an image of the indication region 20 with the first mirrored camera unit 22' or the second mirrored camera unit 24' to produce a first virtual-point image G1 or a second virtual-point image G2, each containing the images of a plurality of actual touch input points and ghost touch input points;
Step 1108: from at least one of the candidate regions, produce a possible touch-input-point distribution region;
Step 1110: from the possible touch-input-point distribution region, produce a first reconstructed image R1 corresponding to the first mirrored camera unit 22' or a second reconstructed image R2 corresponding to the second mirrored camera unit 24';
Step 1112: add the first real-point image I1 to the first reconstructed image R1 to produce a first virtual-point reconstructed image RI1, or add the second real-point image I2 to the second reconstructed image R2 to produce a second virtual-point reconstructed image RI2;
Step 1114: compare the similarity of the first virtual-point image G1 with the first virtual-point reconstructed image RI1, or of the second virtual-point image G2 with the second virtual-point reconstructed image RI2, to determine whether the possible touch-input-point distribution region corresponds to a touch input point.

In step 1108, the possible touch-input-point distribution regions can be determined from the first and/or second real-point images: the number of possible distribution regions is derived from the number of touch-input-point images in the first real-point image and in the second real-point image. Please refer to Fig. 12, which illustrates the possible touch-input-point distribution regions. As shown in Fig. 12, when the first real-point image I1 and/or the second real-point image I2 contain two touch-input-point images, the number of touch input points may be judged to be 2 or 3. In case 1 of Fig. 12, where the number of touch input points is 2, the number of possible distribution regions is 2: the objects may lie in the candidate regions AC and BD, or in the candidate regions BC and AD. More precisely, with two touch input points the possible distributions are the combinations AC and BC, BC and BD, BD and AD, AD and AC, AC and BD, or BC and AD; but the candidate regions containing the touch input points must correspond to the first image intervals A, B of the touch-input-point images on the first real-point image and to the second image intervals C, D on the second real-point image, so the possible distribution regions are AC and BD, or BC and AD. The other combinations do not correspond to all of the first image intervals A, B and second image intervals C, D; for example, the combination BC and BD does not correspond to the first image interval A.

Note that when the first real-point image I1 and/or the second real-point image I2 contain two touch-input-point images (Fig. 12), the number of touch input points is 2 (case 1) or 3 (case 2); the system therefore computes the similarity between the virtual-point image and the virtual-point reconstructed image for both case 1 and case 2 (the computation is detailed below) to determine the actual number of touch points.

To confirm the actual number of touch points precisely, and so reduce the amount of computation when producing the possible distribution regions, the first virtual-point image G1 and the second virtual-point image G2, captured of the indication region 20 by the first mirrored camera unit 22' and the second mirrored camera unit 24', can be used; each contains the images of the actual touch input points and of the ghost touch input points. In another embodiment of the present invention, the first virtual-point image G1 or the second virtual-point image G2 is therefore used together with the first and/or second real-point images to judge the number of possible distribution regions more precisely. Taking the first virtual-point image G1 as an example: when the number of touch input points is 3, the first virtual-point image contains 5 touch-input-point images, of which 3 correspond to the mirror images of the actual touch input points (3 points) and the remaining 2 correspond to the touch-input-point images of the first real-point image (2 points). The number of touch input points in the possible distribution regions can thus be judged from the difference between the number of touch-input-point images in the first virtual-point image G1 and in the first real-point image I1; in the example above this difference is 3 (5 - 2 = 3), so the number of touch input points in the possible distributions is 3.
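The admissible distributions of case 1 can be enumerated mechanically: a hypothesis with a given number of touch points is kept only if its candidate regions jointly correspond to every observed image interval. A small Python sketch, with hypothetical data structures matching the previous one, follows.

from itertools import combinations

def possible_distributions(regions, observed_intervals, point_count):
    # regions: dict such as {"AC": {"A", "C"}, "BD": {"B", "D"},
    # "BC": {"B", "C"}, "AD": {"A", "D"}}; observed_intervals: the set of
    # all image intervals seen by the camera units, e.g. {"A","B","C","D"}.
    hypotheses = []
    for combo in combinations(regions, point_count):
        covered = set().union(*(regions[r] for r in combo))
        if covered == set(observed_intervals):
            hypotheses.append(set(combo))
    return hypotheses

With intervals A, B and C, D and two touch points, this returns exactly {AC, BD} and {BC, AD}, matching case 1 of Fig. 12; {BC, BD}, for example, is rejected because it leaves the interval A unexplained.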

After the possible touch-input-point distribution regions are determined, step 1110 produces on the first mirrored camera unit 22' or the second mirrored camera unit 24' a reconstructed image corresponding to the ghost touch input points in the first virtual-point image G1 or the second virtual-point image G2. Following the example above, the possible distribution regions are the candidate regions AC and BD, or the candidate regions BC and AD. Taking the distribution AC and BD as an example: from the coordinates of the four endpoints of each candidate region, the radius and center coordinates of the region's inscribed circle are produced; the inscribed circle simulates the appearance of a finger in the candidate region, though the inscribed circle is only one form of simulation and other models may be used. Please refer to Fig. 13, which illustrates the inscribed circle of a candidate region. The center of the inscribed circle Cr of the candidate region AC is the center point (xc, yc) of the candidate region; Fig. 4 and its description already explain how this point is computed, which is not repeated here, though it should be noted that the center point (xc, yc) may also be produced in other ways. The radius R of the inscribed circle Cr can be computed as:

R = (d1 + d2 + d3 + d4) / 4

where d1, d2, d3, d4 are the perpendicular distances from the four edges of the candidate region AC to its center point (xc, yc). Taking the center point (xc, yc) of the candidate region AC as the center, together with the radius R, gives the inscribed circle Cr of the candidate region AC; note that the circumference of the inscribed circle Cr may not lie exactly on the edges of the candidate region AC. With the inscribed circle Cr of the candidate region AC obtained, the computed inscribed circle Cr can be projected onto the first mirrored camera unit 22' or the second mirrored camera unit 24', that is, onto the touch-input-point image position of the first mirrored camera unit 22' or the second mirrored camera unit 24'.
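A sketch of the inscribed-circle construction of Fig. 13 follows, assuming the four endpoints are given in order around the region:

import math

def edge_distance(center, a, b):
    # Perpendicular distance from the center point to the line through
    # the edge with endpoints a and b.
    (xc, yc), (ax, ay), (bx, by) = center, a, b
    num = abs((by - ay) * xc - (bx - ax) * yc + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def inscribed_circle(corners, center):
    # corners: the four endpoints (x1,y1)..(x4,y4) of the candidate region
    # in order; center: its center point (xc, yc) from Fig. 4.
    # R = (d1 + d2 + d3 + d4) / 4.
    dists = [edge_distance(center, corners[i], corners[(i + 1) % 4])
             for i in range(4)]
    return center, sum(dists) / 4.0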
14, the first mirror image unit 22 is connected in a straight line, and the center point of the inner circle Cr of the candidate area Ac is connected. The center point of the inscribed circle Cr is perpendicular to the straight line [the direction in which the direction extends to the half R is the tangent point position Cr_pl, Cr_p2 of the inscribed circle Cr. According to the position of the first mirror image capturing unit 22' and the tangent point position Cr_pl, the angle 0Cr_AC_start can be obtained through the trigonometric operation; and the angle of the first mirror image capturing unit 22' and the tangent point position Cr_p2' can be obtained via the two corners. . According to the difference between the declination 0Cr Acca and 0Cr_AC_end, the tangent angle 0cr AC of the candidate region AC can be generated. The angular angle 0Cr AC start and 6>Cr_AC_end are obtained through the angle matrix • the pixel starting position Pei__Ae_start, p on the first mirror camera 201207702 l. And (5), that is, the position of the touch input point image of the mirror image capturing unit 22'.
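A sketch of the Fig. 13 and Fig. 14 constructions follows. It assumes the candidate region is a quadrilateral given by its four corner points and that the mirror-image camera sits at a known (x, y) position; both the coordinates and the camera position below are hypothetical.

```python
import math

def inscribed_circle(corners):
    """Center (xc, yc) as the centroid of the four endpoints, and radius
    R = (d1 + d2 + d3 + d4) / 4, the mean perpendicular distance from the
    center to the four edges, as in the formula above."""
    xc = sum(x for x, _ in corners) / 4.0
    yc = sum(y for _, y in corners) / 4.0
    dists = []
    for i in range(4):
        (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % 4]
        num = abs((y2 - y1) * xc - (x2 - x1) * yc + x2 * y1 - y2 * x1)
        dists.append(num / math.hypot(x2 - x1, y2 - y1))
    return (xc, yc), sum(dists) / 4.0

def tangent_angles(camera, center, radius):
    """Angles toward the two tangent points Cr_p1 and Cr_p2 as seen from the
    mirror-image camera (the construction of Fig. 14)."""
    dx, dy = center[0] - camera[0], center[1] - camera[1]
    distance = math.hypot(dx, dy)
    half = math.asin(min(1.0, radius / distance))  # half of the subtended angle
    mid = math.atan2(dy, dx)                       # angle toward the circle center
    return mid - half, mid + half                  # theta_start, theta_end

corners = [(40, 30), (60, 30), (60, 50), (40, 50)]   # hypothetical region AC
center, R = inscribed_circle(corners)
theta_start, theta_end = tangent_angles((0, -20), center, R)
print(center, R, math.degrees(theta_end - theta_start))  # tangent angle of AC
```

Mapping θCr_AC_start and θCr_AC_end to the pixel start positions would then be a lookup in the preset angle table, which is not modeled here.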

The above steps are repeated to calculate the tangent angle θCr_BD of the candidate region BD, and through the angle table the corresponding pixel start positions PCr_BD_start and PCr_BD_end on the sensor are obtained. Therefore, when the possible distribution areas of the touch input points are the candidate regions AC and BD, the first reconstructed image R1 is generated from PCr_AC_start, PCr_AC_end, PCr_BD_start and PCr_BD_end. In the same way, repeating the steps of Fig. 13 and Fig. 14 for the remaining possible distribution areas of the touch input points yields the reconstructed images of the remaining possible distribution areas.
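How a reconstructed image might be assembled from those pixel positions can be sketched as follows; treating the sensor line as a binary array and the pixel pairs as half-open [start, end) runs is an assumption, and the numeric values are hypothetical.

```python
def reconstruct_image(pixel_runs, sensor_width):
    """Build a binary sensor-line image from [start, end) pixel intervals."""
    image = [0] * sensor_width
    for start, end in pixel_runs:
        for p in range(start, end):
            image[p] = 1  # pixel covered by a projected inscribed circle
    return image

# e.g. the runs for candidate regions AC and BD on a 40-pixel sensor line
R1 = reconstruct_image([(6, 10), (28, 32)], sensor_width=40)
```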

The pixel positions on the second mirror-image camera unit 24' corresponding to the inscribed circle Cr of the candidate region AC are calculated in the same way as for the first mirror-image camera unit 22'. For example, in Fig. 14, a straight line L can connect the second mirror-image camera unit 24' and the center point of the inscribed circle Cr of the candidate region AC; the positions reached by extending from the center point of the inscribed circle Cr, perpendicular to the line L, by the radius R on each side are the tangent point positions of the inscribed circle Cr. The position of the second mirror-image camera unit 24' and the tangent point positions are known, so the starting angles can be obtained by trigonometric calculation, and through the angle table the corresponding pixel start positions on the sensor are obtained. The second reconstructed image R2 is generated from the pixel start positions on the second mirror-image camera unit 24' corresponding to the inscribed circles of the candidate regions AC and BD.

In step 1112, adding the first real-point image I1 to the first reconstructed image R1 of a possible distribution area of the touch input points (for example the candidate regions AC and BD, or the candidate regions BC and AD) produces a first virtual-point reconstructed image RI1 corresponding to that possible distribution area, as shown in Fig. 13. In the same way, adding the second real-point image I2 to the second reconstructed image R2 of a possible distribution area of the touch input points (for example the candidate regions AC and BD, or the candidate regions BC and AD) produces a second virtual-point reconstructed image RI2 corresponding to that possible distribution area.
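A minimal sketch of the addition in step 1112 follows, treating the two images as binary sensor lines and "adding" as a saturating pixel-wise OR; that reading, and the pixel values below, are assumptions.

```python
def add_images(real_image, reconstructed):
    """Pixel-wise OR of two equal-length binary sensor-line images."""
    return [max(a, b) for a, b in zip(real_image, reconstructed)]

I1  = [0, 1, 1, 0, 0, 0, 0, 0]   # shadows seen directly by the camera unit
R1  = [0, 0, 0, 0, 1, 1, 0, 0]   # projection of the candidate inscribed circles
RI1 = add_images(I1, R1)         # -> [0, 1, 1, 0, 1, 1, 0, 0]
```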
In step 1114, the similarity between the first virtual-point image G1 and the first virtual-point reconstructed image RI1 is compared to determine whether the possible distribution area of the touch input points corresponding to the first virtual-point reconstructed image RI1 corresponds to the touch input points. Comparing the similarity between the first virtual-point image G1 and the first virtual-point reconstructed image RI1 means comparing the degree of overlap of their pixel positions on the corresponding first mirror-image camera unit 22'. Please refer to Fig. 15, which illustrates comparing the similarity between the first virtual-point image G1 and the first virtual-point reconstructed image RI1. The degree of similarity S1 between the first virtual-point image G1 and the first virtual-point reconstructed image RI1 can be calculated by the following formula:

S1 = Ov1 / (Ov1 + N1)

where Ov1 is the overlapping portion of the first virtual-point image G1 and the first virtual-point reconstructed image RI1, and N1 is their non-overlapping portion. As shown in Fig. 15, the pixel positions of the first virtual-point image G1 on the first mirror-image camera unit 22' are pixels 5~11, 15~19, 21~25 and 31~34, while the pixel positions of the first virtual-point reconstructed image RI1 on the first mirror-image camera unit 22' are pixels 6~10, 13~17, 21~23 and 28~32. The overlapping portion Ov1 of the first virtual-point image G1 and the first virtual-point reconstructed image RI1 is (4 + 2 + 2 + 1) = 9, and their non-overlapping portion N1 is (1 + 1 + 2 + 2 + 2 + 3 + 2) = 13, so the degree of similarity S1 = 9 / (9 + 13) = 9/22. In this way, the similarity between the first virtual-point reconstructed image RI1 of every possible distribution area of the touch input points and the first virtual-point image G1 can be calculated. The candidate regions of the possible distribution area whose similarity to the first virtual-point image G1 is higher are judged to be the positions of the touch input points. In another embodiment of the present invention, fuzzy similarity processing may also be applied where the first virtual-point image G1 and the first virtual-point reconstructed image RI1 do not overlap.

Similarly, step 1114 may also compare only the second virtual-point image G2 with the second virtual-point reconstructed image RI2 to determine whether the possible distribution area of the touch input points corresponding to the second virtual-point reconstructed image RI2 corresponds to the touch input points. Comparing the similarity between the second virtual-point image G2 and the second virtual-point reconstructed image RI2 means comparing the degree of overlap of their pixel positions on the corresponding second mirror-image camera unit 24'.
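The worked example above can be reproduced exactly if the pixel ranges are read as half-open [start, end) intervals; that reading is an assumption, but it is the one under which the stated counts (Ov1 = 9, N1 = 13, S1 = 9/22) come out.

```python
from fractions import Fraction

def pixels(runs):
    """Expand half-open [start, end) pixel runs into a set of pixel indices."""
    return {p for start, end in runs for p in range(start, end)}

g1  = pixels([(5, 11), (15, 19), (21, 25), (31, 34)])  # first virtual-point image G1
ri1 = pixels([(6, 10), (13, 17), (21, 23), (28, 32)])  # reconstruction RI1

ov1 = len(g1 & ri1)            # overlapping portion Ov1 = 9
n1  = len(g1 ^ ri1)            # non-overlapping portion N1 = 13
s1  = Fraction(ov1, ov1 + n1)
print(ov1, n1, s1)             # 9 13 9/22
```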

The degree of similarity S2 between the second virtual-point image G2 and the second virtual-point reconstructed image RI2 can be calculated by the following formula:

S2 = Ov2 / (Ov2 + N2)

where Ov2 is the overlapping portion of the second virtual-point image G2 and the second virtual-point reconstructed image RI2, and N2 is their non-overlapping portion.

In this way, the similarity between the second virtual-point reconstructed image RI2 of every possible distribution area of the touch input points and the second virtual-point image G2 can be calculated. The candidate regions of the possible distribution area whose similarity to the second virtual-point image G2 is higher are judged to be the positions of the touch input points.

The present invention may also use the similarity S1 between the first virtual-point image G1 and the first virtual-point reconstructed image RI1 together with the similarity S2 between the second virtual-point image G2 and the second virtual-point reconstructed image RI2 to determine the positions of the touch input points. The overall similarity of a possible distribution area of the touch input points is calculated from the similarities S1 and S2 of that possible distribution area. The overall similarity S of a possible distribution area of the touch input points can be calculated by the following formula:

S = (S1 + S2) / 2

By comparing the overall similarity S of each possible distribution area of the touch input points, the candidate regions of the possible distribution area with the higher overall similarity are judged to be the positions of the touch input points.
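A sketch of this final selection step follows; the candidate groupings and the individual scores are hypothetical.

```python
def overall_similarity(s1, s2):
    return (s1 + s2) / 2.0               # S = (S1 + S2) / 2

candidates = {
    "AC and BD": overall_similarity(9 / 22, 11 / 20),
    "BC and AD": overall_similarity(3 / 25, 4 / 21),
}
best = max(candidates, key=candidates.get)
print(best, round(candidates[best], 3))  # the area judged to hold the touches
```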
In summary, one embodiment of the method of the present invention uses the images captured by the first camera unit and the second camera unit to generate a plurality of candidate regions, then captures images with the first mirror-image camera unit located at the mirror position of the first camera unit and the second mirror-image camera unit located at the mirror position of the second camera unit, and determines the probability that a touch input point is located in each candidate region according to the proportion of that candidate region covered by the captured images. The larger the proportion of a region under test covered by the images captured by the first mirror-image camera unit and/or the second mirror-image camera unit, the greater the probability that the candidate region is the position of a touch input point. Another embodiment of the method of the present invention uses the images captured by the first camera unit and the second camera unit to generate the possible distribution areas of the touch input points, calculates a corresponding virtual-point reconstructed image for each possible distribution area, and compares the virtual-point reconstructed image of each possible distribution area with the image captured by the first mirror-image camera unit or the second mirror-image camera unit; the possible distribution area with the higher similarity is judged to give the positions of the touch input points. Therefore, the method provided by the present invention can determine the positions at which an optical touch screen is actually touched in a multi-touch situation, and solves the problem of virtual touch points that conventional optical touch technology suffers from under multi-touch.

The above are only preferred embodiments of the present invention, and all equivalent modifications made according to the scope of the claims of the present invention are within the scope of the present invention.

【Brief description of the drawings】

Fig. 1 illustrates the virtual touch points formed on an optical touch screen under multi-touch because several light paths are blocked.
Fig. 2 is a schematic diagram of an embodiment of the optical touch screen of the present invention.
Fig. 3 is a schematic diagram of an embodiment of the optical touch screen of Fig. 2 in which the light-emitting module is a retroreflector.
Fig. 4 illustrates how the processing unit of Fig. 2 converts the touch input point images of the first camera unit and the touch input point images of the second camera unit into angles according to a preset angle table.
Fig. 5 is a flowchart of the method of the present invention for determining the positions at which the indication area of the optical touch screen is actually touched.
Fig. 6 illustrates calculating the proportion of a candidate region of Fig. 2 covered by the image captured by the first mirror-image camera unit.
Fig. 7 illustrates calculating the proportion of a candidate region of Fig. 2 covered by the images captured by the first and second mirror-image camera units.
Fig. 8 illustrates consistency verification using the image intervals corresponding to the first camera unit and the second camera unit.
Fig. 9 illustrates a misjudgment when consistency verification uses only the first camera unit and the second camera unit.
Fig. 10 illustrates consistency verification of each candidate region using the first camera unit, the second camera unit, the first mirror-image camera unit and the second mirror-image camera unit.
Fig. 11 is a flowchart of the method of the present invention in which the first mirror-image camera unit or the second mirror-image camera unit is used to determine the positions at which the indication area of the optical touch screen is actually touched.
Fig. 12 illustrates the possible distribution areas of the touch input points.
Fig. 13 illustrates generating an inscribed circle in a candidate region.
Fig. 14 illustrates calculating the pixel positions on the first mirror-image camera unit corresponding to the inscribed circle of a candidate region.
Fig. 15 illustrates comparing the similarity between the first virtual-point image and the first virtual-point reconstructed image.

【Description of main component symbols】

LG, LG1, LG2: light source
OP1, OP2: real touch point
GP1, GP2: virtual touch point
2: optical touch screen

20: indication area
22: first camera unit
24: second camera unit
26: mirror
27: light-emitting module
28: processing unit
202: left edge
204: lower edge
206: right edge
208: upper edge
C1: upper-left corner
C2: upper-right corner
20': mirror-image indication area
22': first mirror-image camera unit
24': second mirror-image camera unit
O1, O2: touch input point
(A): bird's-eye view
(B): cross-sectional view
Pa, Pb, Pc, Pd: touch input point image
RR: retroreflector
θa, θb, θc, θd, θa_s, θa_e, θ1, θObserved_AC_S, θObserved_AC_E, θCr_AC_start, θCr_AC_end: angle
A, B: first image interval
C, D: second image interval
E, F: third image interval
G, I, J, K: fourth image interval
AC, BD, BC, AD, ACEI, BCFJ, BDGK, ADGI: candidate region
Pa_s, Pa_e, PIExpected_AC_S, PIExpected_AC_E, PIObserved_AC_S, PIObserved_AC_E, PCr_AC_start, PCr_AC_end, PCr_BD_start, PCr_BD_end: pixel start position
(xc, yc): center point
5, 11: method
500, 502, 504, 506, 508, 510, 1100, 1102, 1104, 1106, 1108, 1110, 1112, 1114: step
PIExpected_AC: first expected pixel length of candidate region AC
PIObserved_AC: first observed pixel length of candidate region AC
θ1Expected_AC: first expected angle of candidate region AC
θ1Observed_AC: first observed angle of candidate region AC
P2Expected_AC: second expected pixel length of candidate region AC
θ2Expected_AC: second expected angle of candidate region AC
θ2Observed_AC: second observed angle of candidate region AC
(x1, y1), (x2, y2), (x3, y3), (x4, y4): endpoint
P1_AC: first coverage ratio of candidate region AC
θ1Expected_BD: first expected angle of candidate region BD
θ1Expected_BC: first expected angle of candidate region BC
θ1Expected_AD: first expected angle of candidate region AD
P1_BD: first coverage ratio of candidate region BD
P2_AC: second coverage ratio of candidate region AC
W1, W2: weight
D1, D2: distance
case1, case2: case
P: overall coverage ratio
G1: first virtual-point image
G2: second virtual-point image
R1: first reconstructed image
R2: second reconstructed image
RI1: first virtual-point reconstructed image
RI2: second virtual-point reconstructed image
Cr: inscribed circle
R: radius
L: straight line
d1, d2, d3, d4: distance
Cr_p1, Cr_p2: tangent point position
θCr_AC: tangent angle of candidate region AC
θCr_BD: tangent angle of candidate region BD
θCr_BC: tangent angle of candidate region BC
θCr_AD: tangent angle of candidate region AD
S1, S2, S: degree of similarity
Ov1, Ov2: overlapping portion
N1, N2: non-overlapping portion

Claims (17)

VII. Scope of the patent application:

1. A method for determining the positions at which an indication area of an optical touch screen is actually touched by a plurality of touch input points, the optical touch screen comprising a first camera unit and a second camera unit, which capture images of the indication area; a light-emitting module for guiding light into the indication area so that it can be sensed by the first and second camera units; and a mirror disposed opposite the first and second camera units, the method comprising:
using the first camera unit to capture an image of the indication area to generate a first real-point image;
using the second camera unit to capture an image of the indication area to generate a second real-point image, wherein the first real-point image and the second real-point image produce a plurality of candidate regions on the indication area;
selecting a region to be detected from the plurality of candidate regions; and
capturing a touch input point image of the region to be detected through the mirror to generate a captured image characteristic, and determining according to the captured image characteristic whether the region to be detected actually corresponds to one of the plurality of touch input points.

2. The method of claim 1, wherein the first real-point image produces a plurality of first image intervals according to the plurality of touch input points, the second real-point image produces a plurality of second image intervals according to the plurality of touch input points, each of the plurality of candidate regions is produced by the union of one of the plurality of first image intervals and one of the plurality of second image intervals, and the plurality of candidate regions produce a plurality of touch probabilities according to their corresponding captured image characteristics, the method further comprising:
selecting the candidate region with the lowest touch probability for elimination, wherein the elimination comprises: determining whether the constituent factors of the candidate regions other than the candidate region with the lowest touch probability include the plurality of first image intervals and the plurality of second image intervals, and if so, deleting the candidate region with the lowest touch probability as not corresponding to a touch input point.

3. The method of claim 1, wherein the first real-point image produces a plurality of first image intervals according to the plurality of touch input points, the second real-point image produces a plurality of second image intervals according to the plurality of touch input points, and the plurality of candidate regions produce a plurality of touch probabilities according to their corresponding captured image characteristics, the method further comprising:
using the first camera unit to capture an image of the indication area through the mirror to generate a first virtual-point image having a plurality of third image intervals, wherein each candidate region is produced by the union of one of the plurality of first image intervals, one of the plurality of second image intervals and one of the plurality of third image intervals; and
selecting the candidate region with the lowest touch probability for elimination, wherein the elimination comprises: determining whether the constituent factors of the candidate regions other than the candidate region with the lowest touch probability include the plurality of first, second and third image intervals, and if so, deleting the candidate region with the lowest touch probability as not corresponding to a touch input point.

4. The method of claim 1, wherein the first camera unit and the second camera unit are disposed outside the upper-left corner and outside the upper-right corner of the indication area, the light-emitting module comprises an upper light-emitting module disposed at the upper edge of the indication area, a first side light-emitting module and a second side light-emitting module disposed at the left edge and the right edge of the indication area, and a lower light-emitting module disposed at the lower edge of the indication area, and the mirror is disposed at the lower edge of the indication area, and wherein using the first camera unit to generate the first real-point image and the virtual-point image comprises:
enabling the first camera unit;
while the first camera unit is enabled, turning on the lower light-emitting module and the second side light-emitting module and using the first camera unit to capture the first real-point image, wherein the lower light-emitting module and the second side light-emitting module are turned on in different periods; and
while the first camera unit is enabled, turning on the upper light-emitting module and the second side light-emitting module, using the first camera unit to capture a third image through the mirror, and obtaining the virtual-point image from the first real-point image and the third image, wherein the upper light-emitting module and the second side light-emitting module are turned on in different periods.

5. The method of claim 4, wherein using the second camera unit to generate the second real-point image comprises:
enabling the second camera unit; and
while the second camera unit is enabled, turning on the lower light-emitting module and the first side light-emitting module and using the second camera unit to capture the second real-point image, wherein the lower light-emitting module and the first side light-emitting module are turned on in different periods.

6. The method of claim 1, wherein capturing a touch input point image of the region to be detected through the mirror to generate a captured image characteristic, and determining according to the captured image characteristic whether the region to be detected actually corresponds to one of the plurality of touch input points, comprises:
generating an expected angle according to the region to be detected and the mirror-symmetric position of the first camera unit;
using the first camera unit to capture the touch input point image of the region to be detected through the mirror to generate an observed angle; and
generating the captured image characteristic according to the expected angle and the observed angle.

7. The method of claim 1, wherein capturing a touch input point image of the region to be detected through the mirror to generate a captured image characteristic, and determining according to the captured image characteristic whether the region to be detected actually corresponds to one of the plurality of touch input points, comprises:
generating an expected pixel image region according to the region to be detected and the mirror-symmetric position of the first camera unit;
using the first camera unit to capture the touch input point image of the region to be detected through the mirror to generate an observed pixel image region; and
generating the captured image characteristic according to the expected pixel image region and the observed pixel image region.

8. The method of claim 1, wherein capturing a touch input point image of the region to be detected through the mirror to generate a captured image characteristic, and determining according to the captured image characteristic whether the region to be detected actually corresponds to one of the plurality of touch input points, comprises:
generating a first expected angle according to the region to be detected and the mirror-symmetric position of the first camera unit;
generating a second expected angle according to the region to be detected and the mirror-symmetric position of the second camera unit;
using the first camera unit to capture the touch input point image of the region to be detected through the mirror to generate a first observed angle;
using the second camera unit to capture the touch input point image of the region to be detected through the mirror to generate a second observed angle;
obtaining a first ratio from the first expected angle and the first observed angle, and a second ratio from the second expected angle and the second observed angle;
generating a weight for the first ratio and a weight for the second ratio according to a first virtual distance between the region to be detected and the mirror-symmetric position of the first camera unit and a second virtual distance between the region to be detected and the mirror-symmetric position of the second camera unit; and
generating the captured image characteristic according to the first ratio, the second ratio, the weight of the first ratio and the weight of the second ratio (see the sketch following the claims).

9. A method for determining the positions at which an indication area of an optical touch screen is actually touched by a plurality of touch input points, the optical touch screen comprising a first camera unit and a second camera unit, which capture images of the indication area; a light-emitting module for guiding light into the indication area so that it can be sensed by the first and second camera units; and a mirror disposed opposite the first and second camera units, the method comprising:
(a) using the first camera unit to capture an image of the indication area to generate a first real-point image;
(b) using the first camera unit to capture an image of the indication area through the mirror to generate a first virtual-point image;
(c) using the second camera unit to capture an image of the indication area to generate a second real-point image, wherein the first image and the second image produce a plurality of candidate regions on the indication area;
(d) generating a possible real-point distribution area from at least one of the plurality of candidate regions;
(e) generating a first reconstructed image corresponding to the mirror according to the possible real-point distribution area, the first real-point image and the first reconstructed image producing a first virtual-point reconstructed image; and
(f) determining according to the first virtual-point image and the first virtual-point reconstructed image whether the possible real-point distribution area corresponds to at least one of the plurality of touch input points.

10. The method of claim 9, wherein in step (d) the possible real-point distribution area is further determined according to the first real-point image or the second real-point image.

11. The method of claim 9, wherein in step (d) the possible real-point distribution area is further determined according to the first real-point image and the first virtual-point image.

12. The method of claim 9, wherein the first camera unit and the second camera unit are disposed outside the upper-left corner and outside the upper-right corner of the indication area, the light-emitting module comprises an upper light-emitting module disposed at the upper edge of the indication area, a first side light-emitting module and a second side light-emitting module disposed at the right edge and the left edge of the indication area, and a lower light-emitting module disposed at the lower edge of the indication area, and the mirror is disposed at the lower edge of the indication area, and wherein using the first camera unit to generate the first real-point image and the first virtual-point image comprises:
enabling the first camera unit;
while the first camera unit is enabled, turning on the lower light-emitting module and the first side light-emitting module and using the first camera unit to capture the first real-point image, wherein the lower light-emitting module and the first side light-emitting module are turned on in different periods; and
while the first camera unit is enabled, turning on the upper light-emitting module and the first side light-emitting module and using the first camera unit to capture the first virtual-point image, wherein the upper light-emitting module and the first side light-emitting module are turned on in different periods.
13. The method of claim 9, wherein the first camera unit and the second camera unit are disposed outside the upper-left corner and outside the upper-right corner of the indication area, the light-emitting module comprises an upper light-emitting module disposed at the upper edge of the indication area, a first side light-emitting module and a second side light-emitting module disposed at the right edge and the left edge of the indication area, and a lower light-emitting module disposed at the lower edge of the indication area, and the mirror is disposed at the lower edge of the indication area, and wherein using the second camera unit to generate the second real-point image comprises:
enabling the second camera unit; and
while the second camera unit is enabled, turning on the lower light-emitting module and the second side light-emitting module and using the second camera unit to capture the second real-point image, wherein the lower light-emitting module and the second side light-emitting module are turned on in different periods.

14. The method of claim 9, wherein generating a possible real-point distribution area from at least one of the plurality of candidate regions comprises:
generating a number of possible real-point distribution areas according to the number of real points of the first image and the number of real points of the second image; and
generating a plurality of possible real-point distribution areas according to that number.

15. The method of claim 10, wherein step (e) comprises:
generating a substantially inscribed circle in each region of the possible real-point distribution area;
obtaining a tangent angle according to the mirror-symmetric position of the first camera unit and the inscribed circle; and
generating the first reconstructed image at the pixel positions of the first camera unit corresponding to the tangent angle.

16. The method of claim 10, wherein step (f) comprises: comparing the similarity between the first virtual-point image and the first virtual-point reconstructed image to determine whether the possible real-point distribution area corresponds to the plurality of touch input points.

17. The method of claim 10, wherein in step (e) the first reconstructed image is generated through the mirror position corresponding to the first camera unit, the method further comprising:
using the second camera unit to capture an image of the indication area to generate a second real-point image;
using the second camera unit to capture an image of the indication area through the mirror to generate a second virtual-point image;
generating a second reconstructed image according to the possible real-point distribution area and the mirror position corresponding to the second camera unit, the second real-point image and the second reconstructed image producing a second virtual-point reconstructed image; and
determining according to the first virtual-point image, the first virtual-point reconstructed image, the second virtual-point image and the second virtual-point reconstructed image whether the possible real-point distribution area corresponds to the plurality of touch input points.

VIII. Drawings:
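Claim 8 combines observed-to-expected angle ratios from the two mirror-symmetric camera positions, weighting each ratio by a virtual distance. A minimal sketch follows; the claim does not fix the weighting rule or any numeric values, so the distance-proportional weights and the sample numbers here are assumptions.

```python
def captured_image_characteristic(observed1, expected1, observed2, expected2,
                                  d1, d2):
    """Weighted combination of the two observed-to-expected angle ratios."""
    ratio1 = observed1 / expected1        # first ratio
    ratio2 = observed2 / expected2        # second ratio
    w1 = d1 / (d1 + d2)                   # weight from the first virtual distance
    w2 = d2 / (d1 + d2)                   # weight from the second virtual distance
    return w1 * ratio1 + w2 * ratio2

feature = captured_image_characteristic(observed1=4.2, expected1=5.0,
                                        observed2=3.6, expected2=4.0,
                                        d1=120.0, d2=80.0)
print(feature)  # a value near 1.0 suggests the region matches a real touch
```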
TW99126732A 2010-08-11 2010-08-11 Method for determining positions of touch points on an optical touch panel TWI423099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99126732A TWI423099B (en) 2010-08-11 2010-08-11 Method for determining positions of touch points on an optical touch panel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99126732A TWI423099B (en) 2010-08-11 2010-08-11 Method for determining positions of touch points on an optical touch panel

Publications (2)

Publication Number Publication Date
TW201207702A true TW201207702A (en) 2012-02-16
TWI423099B TWI423099B (en) 2014-01-11

Family

ID=46762285

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99126732A TWI423099B (en) 2010-08-11 2010-08-11 Method for determining positions of touch points on an optical touch panel

Country Status (1)

Country Link
TW (1) TWI423099B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4393030B2 (en) * 2000-04-14 2010-01-06 富士通株式会社 Optical position detection device and recording medium
US20030234346A1 (en) * 2002-06-21 2003-12-25 Chi-Lei Kao Touch panel apparatus with optical detection for location
TWI362608B (en) * 2008-04-01 2012-04-21 Silitek Electronic Guangzhou Touch panel module and method for determining position of touch point on touch panel
TWM379804U (en) * 2009-09-30 2010-05-01 Cun Yuan Technology Co Ltd Optical position detecting device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI464650B (en) * 2011-12-02 2014-12-11 Wistron Corp Optical touch module and related method of rotary angle adjustment
TWI465988B (en) * 2012-04-13 2014-12-21 Era Optoelectronics Inc Laser scanning input device
US9116574B2 (en) 2013-10-07 2015-08-25 Wistron Corporation Optical touch device and gesture detecting method thereof
TWI502413B (en) * 2013-10-07 2015-10-01 Wistron Corp Optical touch device and gesture detecting method thereof
CN105653101A (en) * 2014-12-03 2016-06-08 纬创资通股份有限公司 Touch point sensing method and optical touch system
CN105653101B (en) * 2014-12-03 2018-04-17 纬创资通股份有限公司 Touch point sensing method and optical touch system

Also Published As

Publication number Publication date
TWI423099B (en) 2014-01-11

Similar Documents

Publication Publication Date Title
US9977543B2 (en) Apparatus and method for detecting surface shear force on a display device
US20190034689A1 (en) Method and system for optical imaging using patterned illumination
CN105579929B (en) Human-computer interaction based on gesture
TWI471784B (en) Optical position input system and method
TW201214243A (en) Optical touch system and object detection method therefor
CN108369471A (en) Mobile device with the display covered by least optical sensor
CN107076859B (en) With the shared pet detector scintillator arrangement with depth of interaction estimation of light
JP2017516208A5 (en)
CN105224845A (en) Identity recognition device and manufacture method, personal identification method
TW201207702A (en) Method for determining positions of touch points on an optical touch panel
CN102915557B (en) Image processing system, termination and method
KR20160088885A (en) Optical Eye Tracking
TW201015404A (en) Optical touch display device, optical touch sensing device and touch sensing method
JP2010055266A (en) Apparatus, method and program for setting position designated in three-dimensional display
TW201040581A (en) Digital image capturing device with stereo image display and touch functions
CN109670390A (en) Living body face recognition method and system
TW201203055A (en) Multiple-input touch panel and method for gesture recognition
TW201113786A (en) Touch sensor apparatus and touch point detection method
TW201207673A (en) Lift detection method for optical mouse and optical mouse using the same
TW201112092A (en) Optical touch system and method thereof
TW201122941A (en) Method of determining pointing object position for three dimensional interaction display
TWI790449B (en) Fingerprint identification device and fingerprint identification method
TW201214239A (en) Input detection device, input detection method, input detection program, and computer readable media
TW201118665A (en) Object-detecting system
TW201118849A (en) Information input device, information input program, and electronic instrument

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees