TW201239709A - Optical touch device and shelter identification method thereof - Google Patents

Optical touch device and shelter identification method thereof Download PDF

Info

Publication number
TW201239709A
TW201239709A (application TW100110162A)
Authority
TW
Taiwan
Prior art keywords
image
touch
connection
area
image sensor
Prior art date
Application number
TW100110162A
Other languages
Chinese (zh)
Inventor
ming-shan Liu
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to TW100110162A priority Critical patent/TW201239709A/en
Publication of TW201239709A publication Critical patent/TW201239709A/en

Links

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

An optical touch device and a shelter identification method thereof are provided. The optical touch device includes a first image detector, a second image detector, and a control module. According to the positions of the first and second image detectors and their detection results, the control module determines a shelter area of a shelter object on the optical touch device. The control module also uses a touch algorithm to predict the moving trace of a movable object after it enters the shelter area.

Description


BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to a touch device, and more particularly to a method for identifying shielded regions of an optical touch device.

Description of the Prior Art

The touch screens of present-day electronic devices are roughly classified into resistive, capacitive, surface-acoustic-wave, and optical sensing types. An optical touch screen uses optical sensors to detect the position of a touch point: a slight touch on the screen is enough to produce a response from the optical sensors, and any input object, such as a finger, a brush, or a stylus, can actuate it. The surface of an optical touch screen is not covered by a coating layer, so it provides a clear image.
In addition, scratches left by the user on the touch screen do not affect its operation. This advantage matters most for public touch screens that see heavy use, such as the touch interfaces of automatic teller machines and multimedia kiosks.

However, the touch-screen products on the market today all determine the touch position on the assumption that no shielding object is present. When an unknown object on the touch screen blocks part of what the optical sensors can see, the touch mechanism of the whole screen misjudges the user's touches.

SUMMARY OF THE INVENTION

The present invention provides an optical touch device including a first image sensor, a second image sensor, and a control module. The first image sensor is disposed around a touch region of a substrate to sense a first image of a shielding object in the touch region, and the second image sensor is disposed around the touch region to sense a second image of the shielding object in the touch region. The control module is coupled to the first image sensor and the second image sensor. It determines a first line according to the position of the first image sensor and a first edge of the first image, a second line according to the position of the first image sensor and a second edge of the first image, a third line according to the position of the second image sensor and a third edge of the second image, and a fourth line according to the position of the second image sensor and a fourth edge of the second image, and it defines at least one shielded region by using the first, second, third, and fourth lines together with the edges of the touch region.

The present invention also provides a shield identification method for an optical touch device that includes a first image sensor and a second image sensor disposed around a touch region of a substrate. The method includes sensing a first image of a shielding object in the touch region with the first image sensor, sensing a second image of the shielding object in the touch region with the second image sensor, determining the first, second, third, and fourth lines from the sensor positions and the four image edges as above, and defining at least one shielded region by using the four lines together with the edges of the touch region.

In one embodiment of the invention, a touch algorithm predicts a second moving trajectory of a moving object inside the shielded region according to the object's first moving trajectory before it enters the region.

Based on the above, the invention provides an optical touch device and a shield identification method thereof: the image sensors sense images of the shielding object in the touch region, and the edges of those images determine the shielded region. Once the shielded region is determined, the touch algorithm predicts the trajectory of an object after it enters the region from its trajectory before entry.

To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

DESCRIPTION OF THE EMBODIMENTS

FIG. 1A is a functional block diagram of an optical touch device 100 according to an embodiment of the invention, and FIG. 1B is a schematic top view of the layout of the optical touch device 100 of FIG. 1A. Referring to FIG. 1A and FIG. 1B, the optical touch device 100 may include, but is not limited to, a substrate 102, a first image sensor 110, a second image sensor 120, a control module 130, and a database 140. The optical touch device 100 may be a tablet computer, a smart phone, or any other electronic device with an optical touch panel, for example a desktop with an optical touch function. The substrate 102 may be a planar substrate of any rigid or flexible material, such as a plastic or metal substrate, and may be transparent or opaque. For example, if this embodiment is applied to a touch display device, the substrate 102 may be a display panel or a display element, giving the display element a touch function; the display element has a display area, and the touch region 114 of the substrate 102 may be that display area. In this embodiment, the substrate 102 may be a tabletop.

In this embodiment, the touch region 114 of the substrate 102 is rectangular; in other embodiments it may take other geometric shapes. In detail, the periphery of the touch region 114 includes side edges A, B, C, and D. In this embodiment, side edges A, B, and C carry a line light source 112, which may, for example, consist of a plurality of point light sources arranged along those side edges; the light they emit may be visible or invisible (for example, infrared). The line light source 112 is not limited to this implementation; it may also be realized with a light guide plate and a backlight.
The control module 130 is coupled to the first image sensor 110, the second image sensor 120, and the database 140. The first image sensor 110 and the second image sensor 120 are disposed on the periphery of the touch region 114 (in this embodiment at the corners P1 and P4, although the invention is not limited thereto) so as to sense and capture, from different angles, a first image and a second image of a shielding object T1 in the touch region 114. The two image sensors may be any kind of optical imaging device, such as a camera, a digital camera, or a line image sensor. According to the positions and sensing results of the first image sensor 110 and the second image sensor 120, the control module 130 determines at least one shielded region of the shielding object T1 on the optical touch device 100, for example the first shielded sub-region Z1 and the second shielded sub-region Z2 shown in FIG. 1B. After the shielded region is defined, the control module 130 uses a touch algorithm to predict the trajectory of a moving object after it enters the shielded region; the detailed steps are described below.

FIG. 2A illustrates the first segment image 210 sensed and captured by the first image sensor 110 of FIG. 1B, containing the shielding object T1, and FIG. 2B illustrates the second segment image 220 sensed and captured by the second image sensor 120. Referring to FIG. 1B, FIG. 2A, and FIG. 2B, the first image sensor 110 and the second image sensor 120 capture the touch region 114 from different angles: the first image sensor 110 senses the touch region 114 and obtains the first segment image 210 of FIG. 2A, while the second image sensor 120 senses the touch region 114 and obtains the second segment image 220 of FIG. 2B.

When the shielding object T1 is present in the touch region 114, the first image sensor 110 captures a first image of T1 in the touch region 114: the first segment image 210 of FIG. 2A contains the first image 215 of the shielding object T1, whose first edge X1 and second edge X2 are the two side edges of T1 as observed from the corner P1. Likewise, the second image sensor 120 senses a second image of T1 in the touch region 114: the second segment image 220 contains the second image 225 of the shielding object T1, whose third edge X3 and fourth edge X4 are the two side edges of T1 as observed from the corner P4.

FIG. 3 illustrates how the control module 130 determines the first line according to an embodiment of the invention. P1 to P3 are corners of the touch region 114 (see FIG. 1B). Since the touch region 114 is known, the segments P1-P3, P1-P2, and P2-P3 are all known, and the angle α between segment P1-P3 and segment P1-P2 is also known; that is, the slope of segment P1-P3 is known. The positions P2', P3', and P4' in the first segment image 210 of FIG. 1B and FIG. 3 correspond respectively to the corners P2, P3, and P4 of the touch region 114, and the length (arc length) of the segment P2'-P3' is known. Referring to FIG. 3, after the first image sensor 110 senses the first image 215, the ratio of the angle α to the angle β equals the ratio of the length of segment P2'-P3' to the length of segment P2'-X1 in the first segment image 210, so the angle β can be computed from α, the length of segment P2'-P3', and the length of segment P2'-X1. In other words, the control module 130 can compute the slope of segment P1-B2, and from the coordinates of corner P1 and that slope it can obtain the equation of the first line L1.

In the same way, the control module 130 can compute the angle between segment P1-X2 and segment P1-P4 from the angle between segments P1-P3 and P1-P4, the length of segment P3'-P4', and the length of segment X2-P4'; this yields the slope of segment P1-X2, that is, the slope of the segment from position P1 to position C1. From the position P1 of the first image sensor 110 and that slope, the control module 130 obtains the equation of the second line L2. By the same reasoning, the control module 130 can compute, from the position P4 of the second image sensor 120 and the position of the third edge X3 of the second image 225 shown in FIG. 2B, the slope of the segment from position P4 to position A1, and so determine the equation of the third line L3; and from the position P4 and the position of the fourth edge X4 of the second image 225 it can compute the slope of the segment from position P4 to position B1, and so determine the equation of the fourth line L4.

Next, the control module 130 can define at least one shielded region by using the first line L1, the second line L2, the third line L3, and the fourth line L4 together with the edges of the touch region 114.
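The angle-ratio construction above (β standing to α as the pixel length P2'-X1 stands to P2'-P3') can be sketched as follows. The linear angle-to-pixel assumption and all names are illustrative, not taken from the patent:

```python
import math

def sight_line(sensor, corner_a, corner_b, px_a, px_b, px_edge):
    """Return (origin, unit_direction) of the line from the sensor through a
    shadow edge, using the ratio beta/alpha = (px_edge - px_a) / (px_b - px_a).

    sensor, corner_a, corner_b: (x, y) points; corner_a/corner_b are the known
    corners seen at image positions px_a and px_b. Assumes the viewing angle
    varies linearly with pixel position (an illustrative simplification).
    """
    # Angles of the rays toward the two known corners.
    ta = math.atan2(corner_a[1] - sensor[1], corner_a[0] - sensor[0])
    tb = math.atan2(corner_b[1] - sensor[1], corner_b[0] - sensor[0])
    alpha = tb - ta                                   # signed field angle, |alpha| < pi
    beta = alpha * (px_edge - px_a) / (px_b - px_a)   # fraction of alpha swept to the edge
    theta = ta + beta
    return sensor, (math.cos(theta), math.sin(theta))
```

From the returned direction and the sensor corner as a point, the line equation of, for example, the first line L1 follows directly.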
In this embodiment, the control module 130 can define the first shielded sub-region Z1 shown in FIG. 1B by using the first line L1, the second line L2, the fourth line L4, and the edges of the touch region 114, and define the second shielded sub-region Z2 by using the third line L3, the fourth line L4, the first line L1, and the edges of the touch region 114. When a moving object H1 (for example a finger or a stylus) enters the first shielded sub-region Z1 or the second shielded sub-region Z2, the moving object H1 is shielded by the shielding object T1, so the control module 130 cannot accurately compute the coordinates of H1 inside Z1 or Z2.

FIG. 4A illustrates trajectory prediction by the touch algorithm according to an embodiment of the invention. The moving object H1 starts at position S1 and moves in a straight line toward position S3. Because of the shielding object T1, the control module 130 may misjudge the trajectory of a moving object inside the second shielded sub-region Z2, for example believing that the trajectory of H1 passes through position S4 when in fact it may not. Therefore, after the first shielded sub-region Z1 and the second shielded sub-region Z2 are defined as above, if a moving object H1 enters Z1 or Z2, the control module 130 uses a touch algorithm to compute the coordinates of H1. In this embodiment, for example, the touch algorithm predicts the moving trajectory of H1 after it enters the second shielded sub-region Z2 according to its moving trajectory before entry.

Taking the straight-line case as an example, the moving object H1 starts at position S1 and moves toward position S3, passing through the shielded sub-region Z2 on the way. The control module 130 therefore provides a touch algorithm, which may be a motion-inertia algorithm that uses the acceleration of the moving object; such motion-inertia algorithms are known art. When the control module 130 judges that H1 is not inside the first shielded sub-region Z1 or the second shielded sub-region Z2, it computes the position of H1 from the sensing and capture results of the image sensors 110 and 120 in the known manner and records the moving trajectory of H1. When the control module 130 judges that H1 has entered Z1 or Z2, it switches to the motion-inertia algorithm to predict the moving trajectory of H1.

In another embodiment, the control module 130 may use another touch algorithm to predict the movement of H1 inside the first shielded sub-region Z1 or the second shielded sub-region Z2. While the control module 130 judges that H1 is not inside Z1 or Z2, it computes the position of H1 from the sensing and capture results of the image sensors 110 and 120 and records the first moving trajectory of H1. If the curvature of the first moving trajectory before H1 enters Z2 is smaller than a threshold value, the touch algorithm applies a line equation to the second moving trajectory of H1 inside the shielded sub-region, the slope of the second moving trajectory being the slope of the first. This line equation predicts that, after entering the second shielded sub-region Z2, H1 passes from position S2 to position S3.

FIG. 4B illustrates trajectory prediction by the touch algorithm according to another embodiment of the invention; this curvature-based prediction operates much like that of FIG. 4A. Here the moving object H1 starts from its initial position and moves along a curve toward position S7, passing through the shielded sub-region Z2 on the way. Because the shielding object T1 affects what the second image sensor 120 can see, the control module 130 might believe the trajectory passes through position S8 when in fact it may not. The control module 130 therefore provides the above touch algorithm, which evaluates the curvature of the first moving trajectory of H1 before it moves into the shielded sub-region Z1 or Z2.
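For the straight-line case just described, continuing the recorded pre-shadow trajectory at its last slope might be sketched as below; this is a minimal stand-in illustration, not the patent's actual implementation:

```python
def extrapolate_linear(trail, steps):
    """Extend the last recorded displacement of `trail` by `steps` points,
    keeping the slope of the first moving trajectory (the line-equation case).

    trail: list of (x, y) positions recorded before the object is occluded.
    """
    (x0, y0), (x1, y1) = trail[-2], trail[-1]
    dx, dy = x1 - x0, y1 - y0   # constant-velocity step
    return [(x1 + k * dx, y1 + k * dy) for k in range(1, steps + 1)]
```

A fuller motion-inertia scheme would also carry an acceleration term; the constant-velocity form above is the simplest member of that family.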

If the curvature of the first moving trajectory is greater than or equal to the threshold value, the touch algorithm applies a parabolic equation (or a circular equation, an arc equation, and so on) to the second moving trajectory of H1 inside the second shielded sub-region Z2. This parabolic equation predicts that, after entering the second shielded sub-region Z2, H1 passes from position S6 to position S7.

The moving trajectories described above may be the coordinate variations of the moving object H1. The threshold value, the line equation, and the parabolic equation may be supplied by the database 140 of the optical touch device 100 to the control module 130, which uses them to carry out the touch algorithm.

FIG. 5 is a flow chart of a shield identification method of an optical touch device according to an embodiment of the invention. Referring to FIG. 5, the first image sensor senses a first image of a shielding object in the touch region (step S502), and the second image sensor senses a second image of the shielding object in the touch region (step S504). Next, a first line is determined according to the position of the first image sensor and a first edge of the first image (step S506), a second line is determined according to the position of the first image sensor and a second edge of the first image (step S508), a third line is determined according to the position of the second image sensor and a third edge of the second image (step S510), and a fourth line is determined according to the position of the second image sensor and a fourth edge of the second image (step S512). At least one shielded region is then defined by using the first, second, third, and fourth lines together with the edges of the touch region (step S514).
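The curvature test that chooses between the line equation and the parabolic equation can be sketched as follows. The quadratic-fit curvature estimate and the default threshold are illustrative assumptions, since the patent leaves both values to the database 140:

```python
import numpy as np

def predict_in_shadow(xs, ys, x_future, curv_threshold=1e-3):
    """Fit the pre-shadow track and pick a line or a parabola by curvature.

    xs, ys: coordinates recorded before the object enters the shielded region.
    Returns the predicted y at x_future.
    """
    a, b, c = np.polyfit(xs, ys, 2)      # y ~ a*x^2 + b*x + c
    if abs(2.0 * a) < curv_threshold:    # |y''| = |2a| as a rough curvature proxy
        m, k = np.polyfit(xs, ys, 1)     # line equation with the track's slope
        return m * x_future + k
    return a * x_future**2 + b * x_future + c
```

A circular or arc fit, which the text also allows, would replace only the high-curvature branch.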
A touch algorithm, which may be a motion-inertia algorithm, is then used to predict a second moving trajectory of a moving object inside the shielded region according to the object's first moving trajectory before it enters the region (step S516); the moving trajectories may be the coordinate variations of the moving object. In another embodiment, the shield identification method of the optical touch device of the invention further includes the following steps (not illustrated): the shielded region may contain a first shielded sub-region and a second shielded sub-region, and the method defines the first shielded sub-region by using the first line, the second line, the fourth line, and the edges of the touch region, and defines the second shielded sub-region by using the third line, the fourth line, the first line, and the edges of the touch region. The touch algorithm further includes applying a line equation to the second moving trajectory if the curvature of the first moving trajectory is smaller than a threshold value, and applying a parabolic equation to the second moving trajectory if the curvature is greater than or equal to the threshold value; the threshold value, the line equation, and the parabolic equation may be provided by a database.

The embodiments above may be implemented in hardware. Following the descriptions of these embodiments, those skilled in the art may also implement them as a computer program and store that program in a computer-readable storage medium; alternatively, some steps of the shield identification method of the optical touch device 100 may be implemented in hardware and the remaining steps in software, the two cooperating to carry out the shield identification method of the optical touch device 100.
In summary, the invention provides an optical touch device and a shield identification method that, even under the influence of a shielding object, define the shielded region from the optically sensed images and predict the moving trajectory of a moving object inside that region according to a touch algorithm. The invention is applicable to many touch products, including mobile phones, tablet computers, digital photo frames, touch screens, and interactive desktops. Once an optical touch device has shield identification capability, the user can touch more naturally: even if a shielding object is placed on the screen, for example a cup set down inside the touch region 114 of an interactive desktop, there is no need to worry about whether it will affect the touch performance, and the user can operate the optical touch device with ease.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the art may make some modifications and refinements without departing from the spirit and scope of the invention, so the protection scope of the invention is defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a functional block diagram of an optical touch device according to an embodiment of the invention.
FIG. 1B is a schematic top view of the layout of the optical touch device 100 of FIG. 1A according to an embodiment of the invention.
FIG. 2A illustrates a first segment image according to an embodiment of the invention.
FIG. 2B illustrates a second segment image according to an embodiment of the invention.
FIG. 3 illustrates the control module 130 determining the first line according to an embodiment of the invention.
FIG. 4A illustrates trajectory prediction by the touch algorithm according to an embodiment of the invention.
FIG. 4B illustrates trajectory prediction by the touch algorithm according to another embodiment of the invention.
FIG. 5 is a flow chart of a shield identification method of an optical touch device according to an embodiment of the invention.

DESCRIPTION OF REFERENCE NUMERALS

100: optical touch device
102: substrate
110: first image sensor
112: line light source
114: touch region
120: second image sensor
130: control module
140: database
210: first segment image
220: second segment image
215: first image
225: second image
A, B, C, D: side edges of the touch region
A1, B1, B2, C1: positions on the side edges of the touch region corresponding to the edges of the first and second images
H1: moving object
L1 to L4: first line to fourth line
P1 to P4: corners of the touch region

P1' to P4': positions within the images
S1 to S8: positions within the touch region
S502 to S516: steps
T1: shielding object
X1 to X4: first edge to fourth edge
Z1: first shielded sub-region
Z2: second shielded sub-region

Claims (1)

VII. Claims:

1. An optical touch device, comprising: a first image sensor, disposed around a touch area of a substrate, for sensing a first image of a shielding object in the touch area; a second image sensor, disposed around the touch area, for sensing a second image of the shielding object in the touch area; and a control module, coupled to the first image sensor and the second image sensor, for determining a first connection line according to a position of the first image sensor and a first edge of the first image, determining a second connection line according to the position of the first image sensor and a second edge of the first image, determining a third connection line according to a position of the second image sensor and a third edge of the second image, determining a fourth connection line according to the position of the second image sensor and a fourth edge of the second image, and defining at least one shielded area by using the first connection line, the second connection line, the third connection line, the fourth connection line, and edges of the touch area.

2. The optical touch device as claimed in claim 1, wherein the shielded area comprises a first shielded sub-area and a second shielded sub-area, and the control module defines the first shielded sub-area by using the first connection line, the second connection line, the fourth connection line, and the edges of the touch area, and defines the second shielded sub-area by using the third connection line, the fourth connection line, the first connection line, and the edges of the touch area.

3. The optical touch device as claimed in claim 1, wherein the control module further comprises a touch algorithm, and the touch algorithm predicts, according to a first moving trace of a moving object before the moving object enters the shielded area, a second moving trace of the moving object in the shielded area.

4. The optical touch device as claimed in claim 3, wherein the touch algorithm comprises a moving inertia algorithm.

5. The optical touch device as claimed in claim 3, wherein the touch algorithm comprises: applying a linear equation to the second moving trace if a curvature of the first moving trace is smaller than a threshold value; and applying a parabolic equation to the second moving trace if the curvature of the first moving trace is greater than or equal to the threshold value.

6. The optical touch device as claimed in claim 5, further comprising: a database, coupled to the control module, for providing the threshold value, the linear equation, and the parabolic equation to the control module so as to predict the second moving trace.

7. The optical touch device as claimed in claim 1, wherein the first image sensor and the second image sensor are respectively disposed at two adjacent corners of the touch area.

8. A shelter identification method of an optical touch device, the optical touch device comprising a first image sensor and a second image sensor disposed around a touch area of a substrate, the shelter identification method comprising: sensing a first image of a shielding object in the touch area by the first image sensor; sensing a second image of the shielding object in the touch area by the second image sensor; determining a first connection line according to a position of the first image sensor and a first edge of the first image; determining a second connection line according to the position of the first image sensor and a second edge of the first image; determining a third connection line according to a position of the second image sensor and a third edge of the second image; determining a fourth connection line according to the position of the second image sensor and a fourth edge of the second image; and defining at least one shielded area by using the first connection line, the second connection line, the third connection line, the fourth connection line, and edges of the touch area.
9. The shelter identification method as claimed in claim 8, wherein the shielded area comprises a first shielded sub-area and a second shielded sub-area, and the shelter identification method further comprises: defining the first shielded sub-area by using the first connection line, the second connection line, the fourth connection line, and the edges of the touch area; and defining the second shielded sub-area by using the third connection line, the fourth connection line, the first connection line, and the edges of the touch area.

10. The shelter identification method as claimed in claim 8, further comprising: predicting, by using a touch algorithm and according to a first moving trace of a moving object before the moving object enters the shielded area, a second moving trace of the moving object in the shielded area.

11. The shelter identification method as claimed in claim 10, wherein the touch algorithm comprises a moving inertia algorithm.

12. The shelter identification method as claimed in claim 10, wherein the touch algorithm comprises: applying a linear equation to the second moving trace if a curvature of the first moving trace is smaller than a threshold value; and applying a parabolic equation to the second moving trace if the curvature of the first moving trace is greater than or equal to the threshold value.
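Claims 1 and 8 bound the shielded area with four connection lines drawn from the two sensor positions through the edges of the object's images. The sketch below is only an illustration under stated assumptions, not the patent's implementation: the coordinate frame, the sensors sitting at two adjacent corners of a 10x10 touch area, the representation of each pair of connection lines by two sight-line angles, the numeric values, and the reading that a point is shielded when its sight line to either sensor passes through the object are all hypothetical choices made here for clarity.

```python
import math

def in_wedge(sensor, theta_lo, theta_hi, point):
    # Angle of the sight line from the sensor to the point.
    ang = math.atan2(point[1] - sensor[1], point[0] - sensor[0])
    return theta_lo <= ang <= theta_hi

def in_shielded_area(s1, wedge1, s2, wedge2, point):
    # wedge1/wedge2: (theta_lo, theta_hi) angles of the two connection
    # lines each sensor draws through the edges of the object's image.
    # A point whose sight line to either sensor is blocked by the object
    # cannot be triangulated, so it is treated as shielded here.
    return in_wedge(s1, *wedge1, point) or in_wedge(s2, *wedge2, point)

# Assumed layout: sensors at two adjacent corners of a 10x10 touch area.
s1, s2 = (0.0, 0.0), (10.0, 0.0)
# A shielding object near (5, 5) subtends a small angle at each sensor.
wedge1 = (0.70, 0.87)   # first/second connection lines from s1
wedge2 = (2.27, 2.44)   # third/fourth connection lines from s2
```

With this layout, a point such as (6, 6), lying behind the object as seen from the first sensor, falls inside `wedge1` and is classified as shielded, while a point near the front edge, such as (2, 1), is visible to both sensors.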
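Claims 5 and 12 recite the trajectory test: if the curvature of the first moving trace is below a threshold value, the second moving trace is extrapolated with a linear equation, otherwise with a parabolic equation. A minimal sketch, assuming the trace is sampled as 2-D points and estimating curvature as the Menger curvature of the last three samples; the default threshold, the helper names, and the discrete extrapolation scheme (constant velocity for the line, constant second difference for the parabola) are illustrative assumptions, not taken from the patent.

```python
import math

def menger_curvature(p1, p2, p3):
    # Curvature of the circle through three points (0 for collinear samples).
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    d12, d23, d13 = math.dist(p1, p2), math.dist(p2, p3), math.dist(p1, p3)
    if d12 * d23 * d13 == 0:
        return 0.0
    return 2.0 * area2 / (d12 * d23 * d13)

def predict_second_trace(first_trace, steps, threshold=0.05):
    # Extrapolate `steps` future points past the last visible sample.
    p1, p2, p3 = first_trace[-3:]
    vx, vy = p3[0] - p2[0], p3[1] - p2[1]
    if menger_curvature(p1, p2, p3) < threshold:
        ax = ay = 0.0                # linear: keep the last displacement
    else:
        ax = vx - (p2[0] - p1[0])    # parabolic: keep the last second
        ay = vy - (p2[1] - p1[1])    # difference ("acceleration")
    x, y, out = p3[0], p3[1], []
    for _ in range(steps):
        vx, vy = vx + ax, vy + ay
        x, y = x + vx, y + vy
        out.append((x, y))
    return out
```

For a straight trace such as (0,0), (1,1), (2,2) the curvature is zero and the prediction continues the line; for a curved trace such as (0,0), (1,1), (2,4) the curvature exceeds the threshold and the prediction follows the parabola y = x².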
TW100110162A 2011-03-24 2011-03-24 Optical touch device and shelter identification method thereof TW201239709A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100110162A TW201239709A (en) 2011-03-24 2011-03-24 Optical touch device and shelter identification method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW100110162A TW201239709A (en) 2011-03-24 2011-03-24 Optical touch device and shelter identification method thereof

Publications (1)

Publication Number Publication Date
TW201239709A true TW201239709A (en) 2012-10-01

Family

ID=47599593

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100110162A TW201239709A (en) 2011-03-24 2011-03-24 Optical touch device and shelter identification method thereof

Country Status (1)

Country Link
TW (1) TW201239709A (en)

Similar Documents

Publication Publication Date Title
US20200257373A1 (en) Terminal and method for controlling the same based on spatial interaction
KR102194272B1 (en) Enhancing touch inputs with gestures
CN105229582B (en) Gesture detection based on proximity sensor and image sensor
US9262016B2 (en) Gesture recognition method and interactive input system employing same
US20140237408A1 (en) Interpretation of pressure based gesture
US9207852B1 (en) Input mechanisms for electronic devices
US20140237422A1 (en) Interpretation of pressure based gesture
US20120274550A1 (en) Gesture mapping for display device
US20140237401A1 (en) Interpretation of a gesture on a touch sensing device
US20140191998A1 (en) Non-contact control method of electronic apparatus
US20140354595A1 (en) Touch input interpretation
US9639167B2 (en) Control method of electronic apparatus having non-contact gesture sensitive region
Sahami Shirazi et al. Exploiting thermal reflection for interactive systems
US20110298708A1 (en) Virtual Touch Interface
KR101208783B1 (en) Wireless communication device and split touch sensitive user input surface
US20100229090A1 (en) Systems and Methods for Interacting With Touch Displays Using Single-Touch and Multi-Touch Gestures
US20070116333A1 (en) Detection of multiple targets on a plane of interest
US20130257734A1 (en) Use of a sensor to enable touch and type modes for hands of a user via a keyboard
KR20110038120A (en) Multi-touch touchscreen incorporating pen tracking
CN105579929A (en) Gesture based human computer interaction
Kurz Thermal touch: Thermography-enabled everywhere touch interfaces for mobile augmented reality applications
US20130106792A1 (en) System and method for enabling multi-display input
WO2012171116A1 (en) Visual feedback by identifying anatomical features of a hand
CN108563389B (en) Display device and user interface display method thereof
US20120120029A1 (en) Display to determine gestures