TW201015402A - Display apparatus for processing touch events - Google Patents

Display apparatus for processing touch events

Info

Publication number
TW201015402A
Authority
TW
Taiwan
Prior art keywords
contact
groups
movement
subset
predetermined
Prior art date
Application number
TW98130594A
Other languages
Chinese (zh)
Inventor
Gerrit Hollemans
Original Assignee
Koninkl Philips Electronics Nv
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninkl Philips Electronics Nv
Publication of TW201015402A


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a method, a display apparatus (100) and an electronic device (400) for processing touch events. The display apparatus (100) comprises a touch screen device (106) and a controller (101). The touch screen device (106) receives touch events as a result of objects touching the touch screen device (106) at respective positions. The objects belong to multiple groups of objects that belong together. The controller (101) is constructed for detecting movement of the positions of the touch events on the touch screen device (106), and for determining whether the movement of a subset of the positions belongs to a pre-defined gesture. The pre-defined gesture indicates that the objects associated with the subset of positions belong to one of the multiple groups. Touch events for which the movement of the subset of positions is determined to belong to the pre-defined gesture are assigned by the controller (101) to one of the multiple groups.

Description

TECHNICAL FIELD

The invention relates to a display apparatus comprising a touch screen device and a controller. The invention further relates to an electronic device comprising the display apparatus and to a method of processing touch events.

BACKGROUND OF THE INVENTION

United States patent application publication US 2007/0262964 A1 discloses a multiple-input touch screen that receives multiple touch inputs. The touch inputs are processed to find a match between the characteristics of the touch inputs and the characteristics of predefined multiple-input gestures. A computer system is then triggered to initiate a specific action in response to the recognized multiple-input gesture. A limitation of this multiple-input touch screen is that it cannot distinguish between touch inputs belonging to gestures of two different hands of one user, or of different users. The multiple-input touch screen is therefore not well suited for use by more than one hand or by more than one person.

SUMMARY OF THE INVENTION

It is an object of the invention to provide a display apparatus that is able to distinguish between touch events belonging to different groups.

A first aspect of the invention provides a display apparatus as claimed in claim 1. A second aspect of the invention provides an electronic device as claimed in the claims.
A third aspect of the invention provides a method as claimed in claim 11. Advantageous embodiments are defined in the dependent claims.

The display apparatus according to the first aspect of the invention comprises a touch screen device and a controller. The touch screen device receives touch events as a result of objects touching the touch screen device at respective positions. The objects are grouped into multiple groups of objects that belong together. The controller detects movement of the positions of the touch events on the touch screen device and determines whether the movement of a subset of the positions belongs to a predefined gesture. The predefined gesture indicates that the objects associated with the subset of positions belong to one of the multiple groups. If the movement of the subset of positions belongs to the predefined gesture, the controller assigns the corresponding touch events to the relevant one of the multiple groups.

If different objects of the multiple groups touch the touch screen device, and if the objects of one of the groups are moved over the touch screen device in accordance with the predefined gesture, the display apparatus determines whether the detected movement belongs to the predefined gesture. As a result of this determination, the display apparatus relates the touch events for which the predefined gesture was detected to the relevant one of the multiple groups. Because the complete set of touch events is subdivided into a subset related to the group whose objects performed the predefined gesture (and, by repetition, into subsets related to all other groups), the problem of the prior art is solved: the display apparatus is able to distinguish the touch events of one of the multiple groups from the other touch events. By performing a predefined gesture with the objects of each group, the touch events of that group can be distinguished from the touch events of the other groups. The predefined gesture may differ from group to group, or it may be the same for every group of objects (or for a subset of the groups), and the gestures may be performed sequentially or simultaneously.

For example, if a person places the ten fingers of both hands on the display apparatus and makes a rotating movement with the left hand (and with the right hand), it is possible to infer from the changes of the finger positions on the display apparatus which fingers belong to which of the two hands.

The electronic device according to the second aspect of the invention and the method according to the third aspect of the invention provide the user with the same advantages as the display apparatus with respect to distinguishing touch events that belong to different groups.

In an embodiment, the predefined gesture is one of the following gestures: rotating the objects of one of the multiple groups around a single point, translating the objects of one of the multiple groups, moving the objects of one of the multiple groups towards a single point, or moving the objects of one of the multiple groups away from a single point.
These gestures are intuitive. The single point may be a single position on the touch screen device; alternatively, it may be a set of closely adjacent positions. If the objects start at a single common point, it is sufficient that the positions of the objects move away from each other during the gesture; if the objects end at a single point, it is sufficient that the positions of the objects move towards each other during the gesture. The gestures of rotating or translating the objects of one group are intuitive because all objects of the group follow the same type of movement: in the rotation gesture, each object follows part of an imaginary circle and all these circles share a single centre point, while in the translation gesture, all objects follow parallel imaginary lines in the same direction, keeping equal distances from one another. The gestures of moving the objects towards or away from a single point are intuitive because the end position or start position of the respective gesture is a single point or lies close to that single point; this proximity indicates that the objects belong together.

It should be noted that, in a further embodiment, the determination of whether the movement of the subset of positions belongs to the predefined gesture may involve a tolerance mechanism. In the case of the rotation gesture, the tolerance may be that the objects do not follow exact circular arcs, or that the centres of the circles do not coincide exactly in a single point. In the case of the translation gesture, the tolerance may be that the lines followed by the objects are not exactly parallel, or that the lines are followed at slightly unequal distances. In the case of moving the objects of a group towards or away from a single point, the tolerance may be that the objects move towards, or away from, points located within a permitted distance of one another.

In a further embodiment, the objects that are grouped into multiple groups are the fingers of multiple hands, and the predefined gesture is a movement of one of the hands. In this embodiment, the fingers of multiple hands touch the touch screen device and move over it. One of the hands may apply the predefined gesture to the touch screen device; the display apparatus detects whether the movement of the subset of fingers corresponds to the predefined gesture and assigns the touch events associated with that subset of fingers to the relevant one of the multiple groups, a group here being formed by one of the multiple hands. The display apparatus according to this embodiment is thus able to receive input from multiple hands and to distinguish between those hands.
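The "towards a single point" and "away from a single point" gestures above can be checked with simple geometry on the recorded trajectories. The sketch below is a minimal illustration of that idea rather than the patent's own algorithm; the function name, the tolerance radius and the trajectory format (an ordered list of (x, y) samples per touch event) are assumptions made for the example.

```python
import math

Point = tuple[float, float]


def _mean(points: list[Point]) -> Point:
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))


def _dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])


def classify_point_gesture(trajectories: list[list[Point]],
                           tolerance: float = 40.0) -> str | None:
    """Classify a set of trajectories as a 'towards a single point' or
    'away from a single point' gesture, or return None.

    Each trajectory is the ordered list of positions of one touch event.
    The tolerance is the radius of the region in which the start points
    (or end points) must lie to count as a 'single point'.
    """
    if len(trajectories) < 2:
        return None
    starts = [t[0] for t in trajectories]
    ends = [t[-1] for t in trajectories]

    # "Away from a single point": all start positions lie close together
    # and every object ends up farther from their common centre.
    start_centre = _mean(starts)
    if (all(_dist(p, start_centre) <= tolerance for p in starts)
            and all(_dist(e, start_centre) > _dist(s, start_centre)
                    for s, e in zip(starts, ends))):
        return "away_from_point"

    # "Towards a single point": all end positions lie close together
    # and every object ends up closer to their common centre.
    end_centre = _mean(ends)
    if (all(_dist(p, end_centre) <= tolerance for p in ends)
            and all(_dist(e, end_centre) < _dist(s, end_centre)
                    for s, e in zip(starts, ends))):
        return "towards_point"

    return None
```

Applied to the finger trajectories of one hand whose fingertips converge, the function returns "towards_point"; applied to the combined trajectories of two hands converging towards two different points it returns None, which is why each hand has to perform its own identification gesture.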

魯 入’且該顯示裝置能翱F八文隹J 攀 直此夠G分多隻手。應注意,-使用者可 使用母隻手的兩個、- 螢幕器件。 二個、四個或五個手指接觸該接觸式 在另f施例中’被分组為多個組群之物件係多個 者之手指,且其中預宏迭能及更用 預疋姿態係一個使用者之一隻手或 手之一移動◎多個使用 一 接觸式螢幕器件。一隹丰夕觸該 式替蓋哭I , X手之手指可施加預定姿態於該接觸 榮幕15件。顯示裝置能夠區分不同使用者(因為可決定 位置之子集之移動屬於預定姿態的接觸事件係與多個使用 142603-doc 201015402 者之一者加以相關)。應注意,一使用者可使用每手之兩 個、三個、四個或五個手指接觸該接觸式螢幕器件。這此 實施例中’使用者利用一隻手施加預定姿態於該接觸式螢 幕器件。在另一種實施例中,預定姿態係由使用者利用兩 隻手加以施加。 在一進一步實施例中,控制器係經進一步建構用於決定 多個組群之相關者之接觸事件之位置之子集的移動是否屬 於用於啟動一隨後動作之一進一步預定姿態。當多個組群 之一者之物件開始接觸該接觸式螢幕器件時,該等物件須 要識別自身之預定姿態。識別被用以決定哪些接觸事件之 子集形成一組群。隨後’該組群之物件施加作為用於顯示 裝置之一命令的進一步預定姿態以啟動隨後動作。因為多 個組群之一者之物件須要在顯示裝置能夠決定該進—步預 定姿態之前識別其等自身,故該顯示裝置將不混合屬於不 同組群之接觸事件。進一步預定姿態將不導致基於屬於不 同組群之接觸事件之進一步預定姿態的一不期望之決定。 因此,當不同組群之物件施加進一步預定姿態以執行各自 命7時可啓用-顯不裝置。隨後動作可由控制器或在該顯 不裝置係一電子器件之部分的情形下由該電子器件之另一 部分加以執行。隨後動作之啟動係可藉由發送一觸發信號 至該電子器件之其他部分而實現。隨後動作之實例正改變 接:式螢幕器件上所顯示之一影像,或正改變相對於該電 子益件之其他功能(類似例如開始播放音樂)的另一動作。 在實施例中,接觸式螢幕器件顯示-影像,且控制器 142603.doc 201015402 產生影像以使得一回饋指示係被顯示於接觸事件之位置。 回饋改良使用者之體驗(因其使得使用者意識到該顯示裝 置之行為)。當使用者在一特定位置接觸該接觸式螢幕器 件之後,顯示裝置指派一位置至偵測到該接觸式勞幕器件 上之物件之接觸的位置。顯示裝置所指派之位置明顯地不 . 完全相同於使用者所假定之位置。經假設之位置與經指派 之位置之間的差異可係(例如)視差之結果。使用者之假定 Φ 與顯示裝置之行為之間的另一差異可係經偵測接觸事件之 數量。例如在偵測接觸之前’在一使用者須利用一物件並 以一最小壓力接觸該接觸式螢幕器件的情形下,反饋顯示 給該使用者物件是否以足夠之壓力接觸該接觸式螢幕器 件。 在另一實施例中,接觸式螢幕器件顯示一影像,且控制 器在可決定位置之子集的移動屬於一預定姿態的情形下產 生顯示一回饋指示之影像。在此實施例中,回饋亦改良使 ❹ 用者之體驗。只要回饋可顯示位置之子集的移動屬於預定 姿態,則顯示裝置發信號告知使用者在接觸式螢幕器件上 根據預定姿態而移動的多個組群之一者之物件。該使用者 熟知(基於此回饋)在接觸式螢幕器件上物件之移動成功。 在另一方面,只要不顯示回饋,該使用者熟知該顯示裝置 尚不能夠決定位置之子集的移動是否屬於預定姿態。因 此,該使用者須繼續或重啟在接觸式螢幕器件上之物件的 移動。 在-進-步實施例中,回饋指示包括在指派至多個組群 142603.doc 201015402 之相關者之接觸事件的位置的一指示。此回饋有助於使用 者瞭解哪些物件與多個組群之一者相關。該回饋發信號告 知使用者哪些物件根據預定姿態而在接觸式螢幕器件上移 動。此有助於使用者判定接觸事件與多個組群之一者之相 關疋否匹配其意圖。有可能一個接觸事件並非與組群相關 (因為與此接觸事件相關之物件的移動係不完全對應於預 定姿態)。若使用者之意圖係識別亦作為組群之一部分的 此物件,則該使用者熟知利用物件而對預定姿態之施加並 非完全成功,且其須重新施加預定姿態。 在一實施例中,控制器包括用於儲存預定姿態之一姿 態儲存單元,用於偵測接觸式螢幕器件上之接觸事件之位 置的移動# -移動偵測單&,用⑨決定位置之子集的移動 疋否屬於-預定姿態的m單元’及用於指派經決定其 位置之子集的移動屬於預定姿態之該等接觸事件至多個組 群之相關者的一指派單元。此實施例之單元執行一特定任 務。特定硬體可被用以執行上文提及之單元的特定任務。 或者,一處理器可被用以執行所提及之任務的全部或一子 集。若該姿態儲存單元包括—可重寫記憶體,則可產生利 用儲存-顧客之特定預定姿態於該姿態儲存單元之記憶體 中而客製化該顯示裝置之機會。 ^ 本發明之此等及其他態樣係自下文所描述之實施例而明 ,,且將參考下文所描述之實施例而加以闡明。 【實施方式】 應注意,在不同圖形中具有相同參考數字之項目係具有 142603.doc 201015402 信號。其中若此項目之 須再以洋盡描述對其加 相同結構特徵及相同功能或係相同 功能及/或結構已得以解釋,則無 以重複解釋。 —控制器1 01及一 控制1§ 101包括一 、一決定單元104 二手120之手指正 圖1顯示—第一實施例。其中顯示包括 接觸式螢幕器件106之一顯示裝置1〇〇。 姿態儲存單元102、一移動偵測單元1〇3 及一指派單元1〇5。一第一手110及一第 僅第一手110之兩手指111及Lu entered 'and the display device can 翱F 八文隹J to climb this enough G points and more hands. It should be noted that - the user can use two of the female hands - the screen device. Two, four or five fingers are in contact with the contact type, and in the other embodiment, the objects grouped into a plurality of groups are fingers of a plurality of groups, and wherein the pre-macroscopic energy and the pre-equivalent posture are one One of the user's hands or hands moves ◎ multiple use of a touch screen device. A phoenix eve touches the cover to cry I, and the fingers of the X hand can apply a predetermined posture to the contact glory 15 pieces. The display device is capable of distinguishing between different users (because the contact event that determines that the movement of the subset of locations belongs to the predetermined pose is associated with one of the plurality of uses 142603-doc 201015402). It should be noted that a user can access the touch screen device using two, three, four or five fingers per hand. In this embodiment, the user applies a predetermined posture to the contact type screen device with one hand. In another embodiment, the predetermined gesture is applied by the user using two hands. 
In a further embodiment, the controller is further constructed for determining whether the movement of a subset of the positions of the touch events of one of the multiple groups belongs to a further predefined gesture for initiating a subsequent action. When the objects of one of the multiple groups start touching the touch screen device, they first have to identify themselves by means of the predefined gesture; this identification is used to determine which subset of touch events forms a group. Subsequently, the objects of that group apply the further predefined gesture as a command to the display apparatus to initiate the subsequent action. Because the objects of a group have to identify themselves before the display apparatus evaluates the further predefined gesture, the display apparatus does not mix touch events that belong to different groups, and the further predefined gesture does not lead to an unwanted decision based on touch events of different groups. The display apparatus can therefore be used while the objects of different groups apply further predefined gestures to execute their respective commands. The subsequent action may be executed by the controller or, if the display apparatus is part of an electronic device, by another part of that electronic device; in the latter case the action may be initiated by sending a trigger signal to the other part of the electronic device. Examples of subsequent actions are changing an image shown on the touch screen device, or changing another function of the electronic device, such as starting the playback of music. A sketch of this two-phase use of gestures is given below.

In an embodiment, the touch screen device displays an image and the controller generates the image such that a feedback indication is shown at the positions of the touch events. Feedback improves the user experience because it makes the user aware of the behaviour of the display apparatus. When the user touches the touch screen device at a particular position, the display apparatus assigns a position to the detected contact, and the assigned position is not necessarily identical to the position the user assumed; the difference may, for example, be the result of parallax. Another difference between the user's assumption and the behaviour of the display apparatus may be the number of detected touch events: if, for instance, contact is only detected when an object presses on the touch screen device with a minimum pressure, the feedback shows the user whether the object touches the touch screen device with sufficient pressure.

In another embodiment, the touch screen device displays an image and the controller generates the image such that a feedback indication is shown when it has been determined that the movement of a subset of positions belongs to a predefined gesture. This feedback also improves the user experience. As soon as the feedback is shown, the display apparatus signals to the user that the objects of one of the groups have been moved over the touch screen device in accordance with the predefined gesture, so the user knows that the movement was successful. Conversely, as long as no feedback is shown, the user knows that the display apparatus has not yet been able to determine that the movement of a subset of positions belongs to the predefined gesture, and that the movement of the objects over the touch screen device has to be continued or restarted.
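The two-phase use of gestures described above — an identification gesture that forms a group, followed by a further gesture that acts as a command — can be pictured as a small state machine per group. The sketch below is an illustrative reading of that protocol rather than code from the patent; the class and method names, and the way recognized gestures are reported, are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class GroupState(Enum):
    UNIDENTIFIED = auto()   # touch events not yet assigned to a group
    IDENTIFIED = auto()     # predefined gesture recognized, group formed
    ACTING = auto()         # further predefined gesture being executed


@dataclass
class GestureGroup:
    """Touch events that have identified themselves as belonging together."""
    event_ids: set[int] = field(default_factory=set)
    state: GroupState = GroupState.UNIDENTIFIED

    def on_predefined_gesture(self, event_ids: set[int]) -> None:
        # Phase 1: the identification gesture assigns touch events to this group.
        self.event_ids = set(event_ids)
        self.state = GroupState.IDENTIFIED

    def on_further_gesture(self, event_ids: set[int], action) -> bool:
        # Phase 2: a further gesture is only accepted from touch events that
        # already belong to the group, so gestures of other groups cannot
        # trigger an unwanted action here.
        if self.state is not GroupState.IDENTIFIED:
            return False
        if not event_ids <= self.event_ids:
            return False
        self.state = GroupState.ACTING
        action()                      # e.g. rotate an image, start playback
        self.state = GroupState.IDENTIFIED
        return True
```

A separate GestureGroup instance would be kept for every detected group, so two users can issue commands concurrently without their touch events being mixed.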
In a further embodiment, the feedback indication comprises an indication at the positions of the touch events that have been assigned to one of the multiple groups. This feedback helps the user to see which objects have been related to one of the groups: it signals which objects were moved over the touch screen device in accordance with the predefined gesture, and thus allows the user to check whether the assignment of touch events to the group matches his intention. It is possible that a touch event was not related to the group because the movement of the corresponding object did not sufficiently match the predefined gesture; if the user intended that object to be part of the group as well, the feedback shows that the application of the predefined gesture was not completely successful and that it has to be applied again.

In an embodiment, the controller comprises a gesture storage unit for storing the predefined gesture, a movement detection unit for detecting movement of the positions of the touch events on the touch screen device, a decision unit for determining whether the movement of a subset of the positions belongs to the predefined gesture, and an assignment unit for assigning the touch events whose positions were determined to move in accordance with the predefined gesture to the relevant one of the multiple groups. Each of these units performs a specific task. Dedicated hardware may be used to perform the specific tasks of the units; alternatively, a processor may be used to perform all of the tasks, or a subset of them. If the gesture storage unit comprises a rewritable memory, the display apparatus can be customized by storing customer-specific predefined gestures in the memory of the gesture storage unit. A sketch of this unit decomposition is given below.

These and other aspects of the invention are apparent from, and will be elucidated with reference to, the embodiments described hereinafter.

DETAILED DESCRIPTION OF EMBODIMENTS

It should be noted that items that have the same reference numerals in different figures have the same structural features and the same functions; where the function and/or structure of such an item has already been explained, it is not explained again.
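The unit decomposition described above (gesture storage unit, movement detection unit, decision unit and assignment unit) maps naturally onto a handful of small software components. The following sketch is one hypothetical arrangement; the interfaces and names are not taken from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class GestureStorageUnit:
    """Holds the predefined gestures (cf. units 102 and 502)."""
    gestures: dict[str, object] = field(default_factory=dict)

    def load(self, name: str, template: object) -> None:
        self.gestures[name] = template   # e.g. read from rewritable memory


class MovementDetectionUnit:
    """Tracks the positions of touch events over time (cf. units 103 and 503)."""
    def __init__(self) -> None:
        self.trajectories: dict[int, list[tuple[float, float]]] = {}

    def on_touch_event(self, event_id: int, position: tuple[float, float]) -> None:
        self.trajectories.setdefault(event_id, []).append(position)


class DecisionUnit:
    """Matches trajectories against stored gestures (cf. units 104 and 504)."""
    def __init__(self, storage: GestureStorageUnit) -> None:
        self.storage = storage

    def matching_subset(self, trajectories) -> set[int] | None:
        # Placeholder: a real implementation performs pattern recognition
        # (see the matching sketch given with the Figure 5 discussion below).
        return None


class AssignmentUnit:
    """Assigns recognized touch events to a group (cf. units 105 and 505)."""
    def __init__(self) -> None:
        self.groups: list[set[int]] = []

    def assign(self, event_ids: set[int]) -> None:
        self.groups.append(set(event_ids))
```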

Figure 1 shows a first embodiment: a display apparatus 100 comprising a touch screen device 106 and a controller 101. The controller 101 comprises a gesture storage unit 102, a movement detection unit 103, a decision unit 104 and an assignment unit 105. A first hand 110 and a second hand 120 are touching the touch screen device 106. Only two fingers 111 and 112 of the first hand 110 touch the touch screen device 106, at the positions indicated by A and B. All five fingers of the second hand 120 touch the touch screen device 106, at the positions indicated by K, L, M, N and O.

A predefined gesture is stored in the gesture storage unit 102. The predefined gesture indicates which fingers belong to one of the multiple hands. In this embodiment, the predefined gesture is defined as a movement of the fingers of a hand towards a single point on the touch screen device 106; the number of fingers moving towards the single point may be two, three, four or five.

The two fingers of the first hand 110 and the five fingers of the second hand 120 touch the touch screen device 106, so the touch screen device 106 receives multiple touch events: in the example shown, seven touch events are received at the positions where the fingers touch the touch screen device 106. The two hands 110, 120 in Figure 1 may belong to different users or to the same user.

The movement detection unit 103 then detects the movement of the positions A, B, K, L, M, N and O, and the information about the position movements is transmitted to the decision unit 104. The decision unit 104 receives the predefined gesture from the gesture storage unit 102 and, on the basis of this information, determines whether the movement of a subset of the positions matches the definition of the predefined gesture.

In the example of Figure 1, the fingers 111 and 112 initially touch the touch screen device 106 at positions A and B, respectively, and both fingers move over the touch screen device 106 in the direction of point C. The movement detection unit 103 detects this movement and the decision unit 104 determines whether it belongs to the predefined gesture. Since the movement of the two fingers 111 and 112 towards a common point matches the predefined gesture, the assignment unit 105 decides that the fingers 111 and 112, which are related to the first hand 110, belong to a first group.

The five fingers of the second hand 120 also move over the touch screen device 106.
The thumb initially touches the touch screen device 106 at position K and moves in the direction of point P; the other four fingers initially touch the touch screen device 106 at positions L, M, N and O, respectively, and also move in the direction of point P. The decision unit 104 determines that the movement of the positions of the five fingers matches the predefined gesture, because the five fingers move towards a single point (in this example, point P), and the assignment unit 105 therefore concludes that the five fingers related to the second hand 120 belong to a second group.

In other words, when the decision unit 104 determines that the movement of a subset of positions corresponds to the predefined gesture of fingers moving towards a single point, the assignment unit 105 assigns the corresponding touch events to the relevant one of the multiple hands. In the example of Figure 1, the touch events caused by the fingers 111 and 112 touching the touch screen device 106 are related to the hand 110, and the other five touch events, caused by the fingers of the second hand 120 touching and moving over the touch screen device 106, are related to the hand 120.

This subdivides the touch events into two groups: the set of touch events related to the first hand 110, and the set of touch events related to the second hand 120. By applying the predefined gesture to the touch screen device 106, the hands 110 and 120 identify their fingers as part of one of the multiple groups. After it has been identified which fingers belong to which hand, the display apparatus can distinguish between the touch events caused by the fingers of the first hand 110 and those caused by the fingers of the second hand 120.

It should be noted that the determination of whether the movement of a subset of positions belongs to the predefined gesture may include a tolerance mechanism. The tolerance may be that the fingers do not move exactly towards a single point P but towards a region of limited size, for example a region bounded by a circle centred on point P with a suitably chosen radius, such that the fingers of the second hand 120 are within that region when they touch each other near point P.

Figure 2 shows a further embodiment. The display apparatus 100 comprises the touch screen device 106 and the controller 101, and the controller 101 comprises the gesture storage unit 102, the movement detection unit 103, the decision unit 104 and the assignment unit 105. Four styluses 210, 211, 220 and 221 touch the touch screen device 106 at positions F, G, E and D, respectively. The styluses 210 and 211 form a first group and the styluses 220 and 221 form a second group.

The display apparatus of this embodiment stores two predefined gestures in the gesture storage unit 102. The first predefined gesture defines that, if the movement of the positions of two or more objects is a translation along parallel lines in one direction, the corresponding touch events are caused by two or more objects belonging to one group touching the touch screen device. The second predefined gesture defines that, if the positions of two or more objects rotate around a single point, the corresponding touch events are related to two or more objects belonging to one group.

In the example of Figure 2, the styluses 210 and 211 initially touch the touch screen device 106 at positions F and G, respectively.
As a result of these two contacts, the touch screen device 106 receives two touch events, each with an associated position. The movement detection unit 103 detects the movement of the positions F and G: the stylus 210 moves over the touch screen device 106 from position F to position F' along part of an imaginary circle with point H as its centre, and the stylus 211 moves from position G to position G' along part of a further imaginary circle, also with point H as its centre.

The decision unit 104 then determines whether the movement of this first subset of positions corresponds to one of the predefined gestures. The movement of the styluses 210 and 211 follows the pattern of one of the two predefined gestures introduced above, so the decision unit 104 concludes that the movement of the first subset of positions belongs to one of the predefined gestures, and the assignment unit 105 assigns the two touch events caused by the styluses 210 and 211 touching the touch screen device 106 to one group.

The styluses 220 and 221 initially touch the touch screen device 106 at positions E and D, respectively. These two contacts cause the touch screen device 106 to receive two additional touch events with their respective positions. The stylus 220 moves over the touch screen device 106 from position E towards position E' along a straight line, and the stylus 221 moves from position D towards position D' along another straight line. As shown in the figure, the line from D to D' and the line from E to E' are parallel to each other. The decision unit 104 therefore concludes that the movement of a second subset of positions, caused by the styluses 220 and 221 touching the touch screen device 106, belongs to one of the two predefined gestures, and the assignment unit 105 assigns the touch events related to the styluses 220 and 221 to another group.

The display apparatus 100 is thus able to subdivide the set of touch events into subsets, because the styluses 210 and 211 of a first group have applied one of the predefined gestures to the touch screen device 106, and the styluses 220 and 221 of a second group have applied one of the predefined gestures to the touch screen device 106.

It should be noted that the determination of whether the movement of a subset of positions belongs to the predefined gesture may involve a tolerance mechanism. Where the predefined gesture defines a rotation of the objects of a group around a single point, the tolerance may be that the centres of the imaginary circles do not coincide exactly in a single point but lie within a small distance Δ of a single point, or that the objects do not follow exact circular arcs. Where the predefined gesture defines a translation of the objects of a group, the tolerance may be that the lines followed by the objects are not exactly parallel or not exactly straight.

It should further be noted that more than two predefined gestures may be stored in the gesture storage unit 102. For example, the gesture discussed with reference to Figure 1 may be stored as well, so that the display apparatus can detect both groups of fingers belonging to the same hand and groups of styluses belonging together. Alternatively or additionally, other movements may be stored as one or more predefined gestures; for example, a predefined gesture may be stored that detects a sloppy rotational movement of the objects belonging to one group.
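Both predefined gestures of Figure 2 can be tested with elementary vector geometry on the start and end positions of each trajectory. The sketch below is a simplified illustration under assumptions of my own (two-point trajectories, angle and distance tolerances passed as parameters, a known candidate centre); it is not the patent's algorithm.

```python
import math

Point = tuple[float, float]


def _sub(a: Point, b: Point) -> Point:
    return (a[0] - b[0], a[1] - b[1])


def _angle_between_deg(u: Point, v: Point) -> float:
    cross = u[0] * v[1] - u[1] * v[0]
    dot = u[0] * v[0] + u[1] * v[1]
    return abs(math.degrees(math.atan2(cross, dot)))


def is_translation(trajectories: list[list[Point]],
                   angle_tol_deg: float = 10.0) -> bool:
    """True if all objects move along (approximately) parallel lines in the
    same direction, as in the first predefined gesture of Figure 2."""
    vectors = [_sub(t[-1], t[0]) for t in trajectories]
    if len(vectors) < 2 or any(v == (0.0, 0.0) for v in vectors):
        return False
    first = vectors[0]
    return all(_angle_between_deg(first, v) <= angle_tol_deg for v in vectors[1:])


def is_rotation_about_common_point(trajectories: list[list[Point]],
                                   centre: Point,
                                   radius_tol: float = 15.0) -> bool:
    """True if every object stays at a roughly constant distance from a given
    centre point while moving, as in the second predefined gesture of Figure 2
    (in practice the centre, here point H, would first be estimated)."""
    for t in trajectories:
        radii = [math.hypot(p[0] - centre[0], p[1] - centre[1]) for p in t]
        if max(radii) - min(radii) > radius_tol:
            return False
        if math.dist(t[0], t[-1]) < 1e-6:   # no actual movement
            return False
    return True
```

In Figure 2 the displacements D to D' and E to E' would satisfy the translation test, while the arcs F to F' and G to G' around point H would satisfy the rotation test.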
Figure 3 shows another embodiment. The display apparatus 300 comprises a touch screen device 306 and a controller 301. A hand 310 touches the touch screen device 306 with two fingers 311 and 312 at positions close to point Q. The figure also shows two styluses 320 and 321 that touch the touch screen device 306 at positions U and V, respectively, and two further styluses 331 and 332 that touch the touch screen device 306. Feedback indications 322, 323 and 333 are displayed on the touch screen device 306.

The touch screen device 306 receives, at the respective positions, the touch events caused by the fingers 311 and 312 and the touch events caused by the styluses 320, 321, 331 and 332. The controller 301 receives the touch events from the touch screen device 306 and detects the movement of their positions on the touch screen device 306. The controller 301 determines whether the movement of a subset of the positions belongs to the predefined gesture, which in this embodiment is defined as a movement of the positions of two or more objects of one group away from a single point.

The fingers 311 and 312 initially touch the touch screen device 306 at positions close to point Q; finger 311 then moves in the direction of position R and finger 312 moves in the direction of position S. This movement of the positions matches the predefined gesture, because the objects move away from the single point. Since it is determined that the movement of this subset of positions belongs to the predefined gesture, the controller 301 assigns the touch events corresponding to the contacts of the fingers 311 and 312 to a first group.

The styluses 320 and 321 are shown in Figure 3 at positions on the touch screen device 306 after having moved from positions close to point T towards positions U and V, respectively. The movement of the styluses 320 and 321 likewise corresponds to the predefined gesture of moving away from a single point, and the touch events of the styluses 320 and 321 are assigned by the controller 301 to a second group.

The touch screen device 306 of the display apparatus 300 displays an image that is generated by the controller 301, and the image shows the feedback indications 322, 323 and 333.

The displayed ellipse 323 is a feedback indication showing that the controller 301 has determined that the movement of a subset of positions belongs to the predefined gesture. In the example of the embodiment of Figure 3, the ellipse 323 is drawn around the positions where the styluses 320 and 321 touch the touch screen device 306, and the feedback indication 323 is displayed immediately after it has been determined that the movement of the subset of positions belongs to the predefined gesture. It should be noted that other forms of feedback indication may be displayed as well.
Other examples are other geometrical shapes drawn around the positions of the touch events assigned to the second group, changing the foreground or background colour in an area around the positions of the touch events of the second group, briefly changing the foreground or background colour of the whole image immediately after it has been determined that the movement of the positions belongs to the predefined gesture, or displaying, for example at a fixed position of the touch screen device 306, a text indicating that the predefined gesture has been recognized.

The feedback indication 322 is displayed on the touch screen device 306 at the positions of the touch events assigned to the second group. In the example of Figure 3, the indication 322 blinks around the positions where the styluses 320 and 321 touch the touch screen device 306. Other feedback indications that show the user of the display apparatus 300 which touch events have been assigned to a group are specific geometrical shapes displayed in a specific colour at the positions of the touch events of a group; such shapes may blink, and, for the convenience of the user, different shapes in different colours may be used for different groups. Another example of a feedback indication that shows which touch events belong to a group is connecting the positions of those touch events with lines.

The feedback indication 333 is an indication of a position at which a touch event has been received by the touch screen device 306. At the positions where the styluses 331 and 332 touch the touch screen device 306, the feedback indication 333 is an 'X'. Other examples of feedback indications at positions where touch events are received are a cursor, a pointer, an arrow, a small circle, or a local change of the background or foreground colour. The feedback indication 333 differs from the feedback indication 322: the feedback indication 333 is displayed at positions of touch events that have not been related to one of the multiple groups, whereas the feedback indication 322 is displayed at positions that have been related to one of the multiple groups. For the convenience of the user, the feedback indications 322 and 333 should differ from each other.

Figure 4A shows a table-top console 400 comprising an electronic device for viewing and editing images. The electronic device comprises a display apparatus according to the invention; the figure shows the touch screen device 406 of the display apparatus, and the other parts of the electronic device, including the controller of the display apparatus, are assumed to be built into the console. The touch screen device 406 displays a number of small images 410 to the users of the console (the users are not shown in the figure). A first large image 411 is displayed to a first user and a second large image 412 is displayed to a second user.

The users of the console 400 have to identify the fingers of their hands by applying a predefined gesture to the touch screen device 406.

In the embodiment of Figure 4, the predefined gesture is a movement of the fingers of a hand away from a single point. This predefined gesture may then be followed by a further predefined gesture to initiate a subsequent action; the further predefined gesture is a rotation of the fingers of the hand around a single point, and the subsequent action is rotating an image.

Figure 4A shows the sequence of the predefined gesture and the further predefined gesture in the region of the second large image 412; this part of Figure 4A is shown enlarged in Figure 4B. The second user initially touches the touch screen device 406 with three fingers of one hand at the positions W, X and Y, which results in three touch events. Each of the three fingers then moves over the touch screen device 406 in the direction of W', X' and Y', respectively, so that all three fingers move away from the single point Z. The controller detects the movement of the positions, determines that the movement of the positions of the three fingers corresponds to the predefined gesture, and assigns the three touch events to a group related to the user's hand. In a further embodiment, the touch screen device 406 displays an image with a feedback indication showing that the predefined gesture has been recognized for the movement of the fingers.

The controller of the display apparatus is further constructed for determining whether the movement of a subset of the positions of the touch events of the group belongs to a further predefined gesture. The second user continues by moving the fingers over the touch screen device 406 from W' to W'', from X' to X'' and from Y' to Y''. The movement of the positions of the three fingers matches the further predefined gesture, because it is a rotation of the fingers of the user's hand around the single point Z.

Because the further predefined gesture has been determined, the controller initiates the subsequent action. Since the further predefined gesture was applied within the second large image 412 on the touch screen device 406, the subsequent action that is initiated is rotating that image. The controller rotates the image on the touch screen device 406; in an alternative embodiment, the controller sends an interrupt signal to another part of the electronic device, which performs the rotation of the image.

The first image 411 is displayed to the first user. The first user can initiate actions with respect to the first image 411 by applying the predefined gesture to identify the fingers of his hand, followed by the further predefined gesture. If the first user applies these gestures within the region of the first image 411, the subsequent action relates to the first image 411.
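When the further gesture of Figures 4A and 4B is recognized, the application still needs an angle by which to rotate the image. One way to obtain it, assuming per-finger start and end positions and an estimated centre Z, is sketched below; the function name and the averaging strategy are illustrative choices, not taken from the patent.

```python
import math

Point = tuple[float, float]


def rotation_angle_deg(starts: list[Point], ends: list[Point],
                       centre: Point) -> float:
    """Average signed rotation angle (degrees) of the fingers around `centre`,
    e.g. the movement W'->W'', X'->X'', Y'->Y'' around point Z in Fig. 4B."""
    angles = []
    for (sx, sy), (ex, ey) in zip(starts, ends):
        a0 = math.atan2(sy - centre[1], sx - centre[0])
        a1 = math.atan2(ey - centre[1], ex - centre[0])
        delta = a1 - a0
        # normalise to [-180, 180) so the direction of rotation is preserved
        delta = (delta + math.pi) % (2 * math.pi) - math.pi
        angles.append(delta)
    return math.degrees(sum(angles) / len(angles))
```

The image under the gesture would then be rotated by the returned angle, which is the "subsequent action" of this embodiment.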
The electronic device for viewing and editing images is thus able to distinguish between the touch events of the first user and those of the second user, so two or more people can sit around the console 400 and use the electronic device simultaneously to view and edit images.

It should be noted that the further predefined gesture is not limited to a rotation of the fingers of a hand around a single point, and that the subsequent action is not limited to rotating the first image 411 or the second image 412. Other examples of further predefined gestures are a translation of the fingers of the hand, moving the fingers further away from the single point, or moving the fingers back towards the single point. In the case of a translation, the subsequent action may be moving the first image 411 or the second image 412 across the screen, or, if only part of the image is displayed, panning through it. In the case of moving the fingers further away from the single point, for example after the predefined gesture of moving away from a single point, the subsequent action may be zooming in on part of the image; in the case of moving the fingers back towards the single point, the subsequent action may be zooming out of the image.

It should also be noted that a display apparatus according to the invention may be used in other electronic devices, for example a personal digital assistant, a laptop computer, an electronic book reader or a computer monitor.

Figure 5 shows a block diagram of a display apparatus 500 comprising a touch screen device 506 and a controller 501. The controller 501 comprises a gesture storage unit 502, a movement detection unit 503, a decision unit 504 and an assignment unit 505.

The touch screen device 506 receives touch events at respective positions as a result of objects touching it, and the touch events are passed, in the form of a signal, to the movement detection unit 503. The touch events received by the touch screen device 506 are sampled at regular moments in time, and the movement detection unit 503 detects the movement of the positions of the touch events by tracking the changes in those positions over successive detection moments.

The movement of the positions of the touch events is transferred, in the form of a further signal, to the decision unit 504, which receives for every touch event a sequence of positions describing the movement of the position, or a sequence of position changes. The decision unit 504 also receives the predefined gestures from the gesture storage unit 502 in the form of a further signal.

The predefined gestures are stored in the gesture storage unit 502, which comprises a memory for this purpose. In the case of a volatile memory, such as a random access memory (RAM), the predefined gestures are loaded into the memory when the display apparatus starts operating, and they may be reloaded with an updated version during operation.
In the case of a non-volatile memory, the information describing the predefined gestures is programmed into the memory during manufacture of the gesture storage unit 502 (for example, in the case of a read-only memory), is programmed into the memory during installation (for example, into an EPROM), or is loaded into the memory during earlier use of the display apparatus (for example, if a flash memory is used).

The predefined gestures may be stored in several forms, for example as a description of a topology together with, where necessary, changes of that topology over time and geometrical information relating to it. A topology describes which point lies to the left of, to the right of, above or below another point, and the geometrical information describes distances between certain points. For instance, in the gesture of two objects moving away from a single point, the topology is a point a to the left of a point b; this description does not include a change of the topology over time, but it does include information about the distance between a and b, which is initially smaller than a threshold and increases over time. Another way of describing and storing a predefined gesture is as a set of vectors starting at specific points: for example, in the gesture of three objects moving away from a single point, there are three vectors starting at positions located close to a single centre point, all of which are directed away from that centre point.

The decision unit 504 is dedicated hardware that performs a form of pattern recognition. A predefined gesture describes a pattern of position movements, and this pattern has to be matched against the movement of a subset of the positions of all received touch events. The pattern recognition may be invariant to translation, scaling and rotation, and may, where required, include other distortion-tolerant pattern recognition mechanisms. If the decision unit 504 determines that the movement of a subset of positions belongs to a predefined gesture, it sends, in the form of a signal, information about the touch events whose positions were detected to move in accordance with the predefined gesture to the assignment unit 505.

The assignment unit 505 receives the signal from the decision unit 504 and assigns the corresponding touch events to one of the multiple groups.
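The vector-based storage form and the translation-invariant matching mentioned above can be combined into a very small matcher. The sketch below is only one plausible reading of that idea — the normalisation step, the similarity threshold and the data layout are assumptions — and it only illustrates translation and scale invariance, not full rotation invariance.

```python
import math

Point = tuple[float, float]


def _displacements(trajectories: list[list[Point]]) -> list[Point]:
    """One displacement vector (end minus start) per touch event."""
    return [(t[-1][0] - t[0][0], t[-1][1] - t[0][1]) for t in trajectories]


def _normalise(vectors: list[Point]) -> list[Point]:
    """Remove overall scale so templates match regardless of gesture size."""
    scale = max(math.hypot(x, y) for x, y in vectors) or 1.0
    return [(x / scale, y / scale) for x, y in vectors]


def matches_template(trajectories: list[list[Point]],
                     template: list[Point],
                     max_error: float = 0.3) -> bool:
    """Compare observed displacement vectors against a stored gesture
    template (one unit-scale vector per expected object).  Using only
    displacements makes the test independent of where on the screen the
    gesture is performed (translation invariance)."""
    observed = _normalise(_displacements(trajectories))
    if len(observed) != len(template):
        return False
    # Greedy nearest matching keeps the sketch short; a real matcher would
    # use an assignment algorithm and could add rotation invariance.
    remaining = list(template)
    for vec in observed:
        best = min(remaining, key=lambda t: math.dist(vec, t))
        if math.dist(vec, best) > max_error:
            return False
        remaining.remove(best)
    return True
```

A template for the "three fingers moving away from a single point" gesture could, for example, be three unit vectors pointing outwards at roughly 120-degree intervals.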
In step 605, it is determined whether the movement of the subset of positions of the touch events of the relevant one of the multiple groups belongs to a further predetermined gesture. If the movement of the subset of positions of the touch events of the relevant one of the multiple groups belongs to the further predetermined gesture, a subsequent action is initiated in step 606. If the movement of the subset of positions of the touch events of the relevant one of the multiple groups does not belong to the further predetermined gesture, no further steps are taken.

It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The display apparatus may simultaneously receive touch events from a stylus and from fingers, receive touch events only from a stylus, or receive touch events only from fingers. Objects other than a stylus may also be used, such as a pen, a ballpoint pen, a playing piece or other piece of material used in a game (such as chess or checkers pieces), or any other object that is detected by the touch screen device when it touches the touch screen device. The above embodiments are also not limited to the disclosed predetermined gestures or combinations of predetermined gestures. The display apparatus may determine whether the movement of the subset of positions belongs to any set of predetermined gestures; the set of gestures discussed (moving away from a single point, moving towards a single point, translating the objects of a group, or rotating the objects of a group around a single point) is only an example.

In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The verb "comprise" and its conjugations do not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 schematically shows a display apparatus that receives input from two hands,
FIG. 2 schematically shows a display apparatus that receives input from objects of groups,
FIG. 3 schematically shows a display apparatus that receives input from two hands and a group of stylus pens performing predetermined gestures and that displays different feedback indications,
FIG. 4A schematically shows an electronic device for viewing and editing images,
FIG. 4B is a schematic enlarged view showing a predetermined gesture and a further predetermined gesture on the electronic device of FIG. 4A,
FIG. 5 shows a block diagram of the display apparatus, and
FIG. 6 shows a flow chart of the method according to the invention.
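Returning to the method of FIG. 6, the sketch below illustrates steps 605 and 606: an already-assigned group of touch events is checked against further predetermined gestures and, on a match, a subsequent action is started. The gesture table, the action names and the compressed spread() check are hypothetical and merely echo the zoom and rotation examples given earlier.

```python
import math
from typing import Callable, Dict, List, Sequence, Tuple

Position = Tuple[float, float]
Track = List[Position]


def spread(tracks: Sequence[Track]) -> bool:
    """Fingers of one group all move away from a single centre point (compressed check)."""
    if len(tracks) < 2 or any(len(t) < 2 for t in tracks):
        return False
    cx = sum(t[0][0] for t in tracks) / len(tracks)
    cy = sum(t[0][1] for t in tracks) / len(tracks)

    def dist(p: Position) -> float:
        return math.hypot(p[0] - cx, p[1] - cy)

    return all(dist(t[-1]) > dist(t[0]) for t in tracks)


# Hypothetical mapping from further predetermined gestures to subsequent actions;
# the names only echo the zoom-in / zoom-out / rotate examples described above.
FURTHER_GESTURES: Dict[str, Callable[[Sequence[Track]], bool]] = {"spread": spread}
SUBSEQUENT_ACTIONS: Dict[str, Callable[[], None]] = {
    "spread": lambda: print("subsequent action: zoom in on the image"),
}


def process_assigned_group(tracks: Sequence[Track]) -> None:
    """Steps 605-606 for one already-assigned group of touch events."""
    for name, matcher in FURTHER_GESTURES.items():   # step 605
        if matcher(tracks):
            SUBSEQUENT_ACTIONS[name]()                # step 606
            return
    # No further predetermined gesture matched: no further steps are taken.


process_assigned_group([[(0.0, 0.0), (-10.0, -10.0)], [(5.0, 5.0), (20.0, 20.0)]])
```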
[Description of main component symbols]

100, 300, 500        display apparatus
101, 301, 501        controller
102, 502             gesture storage unit
103, 503             movement detection unit
104, 504             decision unit
105, 505             assignment unit
106, 306, 406, 506   touch screen device
110, 120, 310        hand
111, 112, 311, 312   object/finger
210, 211, 220, 221   object
320, 321, 331, 332   object
322, 323, 333        feedback indication
601                  receiving (step)
602                  detecting (step)
603                  determining (step)
604                  assigning (step)
605                  determining (step)
606                  initiating (step)

Claims (1)

VII. Claims:

1. A display apparatus (100, 300, 500), comprising:
a touch screen device (106, 306, 406, 506) for receiving a plurality of touch events caused by objects touching the touch screen device (106, 306, 406, 506) at respective positions, the objects (111, 112, 210, 211, 220, 221, 311, 312, 320, 321, 331, 332) being grouped into multiple groups of objects that belong together, and
a controller (101, 301, 501) constructed for
detecting movement of the positions of the touch events on the touch screen device (106, 306, 406, 506),
determining whether the movement of a subset of the positions belongs to a predetermined gesture, the predetermined gesture indicating that the objects (111, 112, 210, 211, 220, 221, 311, 312, 320, 321, 331, 332) associated with the subset of positions belong to one of the multiple groups, and
assigning the touch events for which the movement of the subset of positions is determined to belong to the predetermined gesture to the relevant one of the multiple groups.

2. The display apparatus (100, 300, 500) of claim 1, wherein the first predetermined gesture is one of the following gestures: rotating the objects (210, 211) of one of the multiple groups around a single point (H), translating the objects (220, 221) of one of the multiple groups, moving the objects (210, 211) of one of the multiple groups towards a single point (C, P), or moving the objects (311, 312, 320, 321) of one of the multiple groups away from a single point (Q, T, Z).

3. The display apparatus (100, 300, 500) of claim 1 or 2, wherein the objects (111, 112, 311, 312) grouped into the multiple groups are fingers (111, 112, 311, 312) of multiple hands (110, 120, 310), and wherein the predetermined gesture is a movement of one of the hands (110, 120, 310).

4. The display apparatus (100, 300, 500) of claim 1 or 2, wherein the objects (111, 112, 311, 312) grouped into the multiple groups are fingers (111, 112, 311, 312) of multiple users, and wherein the predetermined gesture is a movement of one hand or of both hands (110, 120, 310) of one of the users.

5. The display apparatus (100, 300, 500) of claim 1, wherein the controller (101, 301, 501) is further constructed for determining whether the movement of the subset of positions of the touch events of the relevant one of the multiple groups belongs to a further predetermined gesture for initiating a subsequent action.

6. The display apparatus (300) of claim 1, wherein the touch screen device (306) is further arranged for displaying an image, and wherein the controller (301) is further constructed for generating the image displaying a feedback indication (333) at the positions of the touch events.

7. The display apparatus (300) of claim 1, wherein the touch screen device is further arranged for displaying an image, and wherein the controller is further constructed for generating the image displaying a feedback indication (323) in the case where it is determined that the movement of the subset of positions belongs to a predetermined gesture.

8. The display apparatus (300) of claim 7, wherein the feedback indication (322, 323) comprises an indication (322) at the positions of the touch events assigned to the relevant one of the multiple groups.

9. The display apparatus (100, 500) of claim 1, wherein the controller (101, 501) comprises:
a gesture storage unit (102, 502) for storing the predetermined gesture,
a movement detection unit (103, 503) for detecting movement of the positions of the touch events on the touch screen device,
a decision unit (104, 504) for determining whether the movement of the subset of positions belongs to a predetermined gesture, and
an assignment unit (105, 505) for assigning the touch events for which it is detected that the movement of the subset of positions belongs to the predetermined gesture to the relevant one of the multiple groups.

10. An electronic device comprising the display apparatus of claim 1.

11. A method of processing touch events, comprising:
receiving (601) the plurality of touch events caused by objects (111, 112, 210, 211, 220, 221, 311, 312, 320, 321, 331, 332) touching a screen device (106, 306, 406, 506) at respective positions, the objects being grouped into multiple groups of objects that belong together,
detecting (602) movement of the positions of the touch events on the touch screen device (106, 306, 406, 506),
determining (603) whether the movement of a subset of the positions belongs to a predetermined gesture, the predetermined gesture indicating that the objects (111, 112, 210, 211, 220, 221, 311, 312, 320, 321, 331, 332) associated with the subset of positions belong to one of the multiple groups, and
assigning (604) the touch events for which it is detected that the movement of the subset of positions belongs to the predetermined gesture to the relevant one of the multiple groups.

12. The method of claim 11, further comprising the steps of:
determining (605) whether the movement of the subset of positions of the touch events of the relevant one of the multiple groups belongs to a further predetermined gesture; and
initiating (606) a subsequent action if the movement of the subset of positions of the touch events of the relevant one of the multiple groups belongs to the further predetermined gesture.
TW98130594A 2008-09-12 2009-09-10 Display apparatus for processing touch events TW201015402A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP08164268 2008-09-12

Publications (1)

Publication Number Publication Date
TW201015402A true TW201015402A (en) 2010-04-16

Family

ID=41683077

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98130594A TW201015402A (en) 2008-09-12 2009-09-10 Display apparatus for processing touch events

Country Status (2)

Country Link
TW (1) TW201015402A (en)
WO (1) WO2010029491A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446586A (en) * 2014-09-19 2016-03-30 三星电子株式会社 Display apparatus and method for controlling the same

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5716502B2 (en) * 2011-04-06 2015-05-13 ソニー株式会社 Information processing apparatus, information processing method, and computer program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6570557B1 (en) * 2001-02-10 2003-05-27 Finger Works, Inc. Multi-touch system and method for emulating modifier keys via fingertip chords
US7193608B2 (en) * 2003-05-27 2007-03-20 York University Collaborative pointing devices
US9063647B2 (en) * 2006-05-12 2015-06-23 Microsoft Technology Licensing, Llc Multi-touch uses, gestures, and implementation
US8970502B2 (en) * 2006-05-26 2015-03-03 Touchtable Ab User identification for multi-user touch screens
WO2008014819A1 (en) * 2006-08-02 2008-02-07 Alterface S.A. Multi-user pointing apparatus and method
US20080065619A1 (en) * 2006-09-07 2008-03-13 Bhogal Kulvir S Method, system, and program product for sharing collaborative data among a plurality of authors

Also Published As

Publication number Publication date
WO2010029491A2 (en) 2010-03-18
WO2010029491A3 (en) 2010-07-01

Similar Documents

Publication Publication Date Title
EP3461291B1 (en) Implementation of a biometric enrollment user interface
CN102253797B (en) Touch event model
US20120235926A1 (en) Handheld devices and related data transmission methods
EP2610726B1 (en) Drag and drop operation in a graphical user interface with highlight of target objects
US20120262386A1 (en) Touch based user interface device and method
US20110248927A1 (en) Multi-mode touchscreen user interface for a multi-state touchscreen device
US10180714B1 (en) Two-handed multi-stroke marking menus for multi-touch devices
KR20130061711A (en) Apparatus and method for transferring information items between communications devices
JP7233109B2 (en) Touch-sensitive surface-display input method, electronic device, input control method and system with tactile-visual technology
CN104364734A (en) Remote session control using multi-touch inputs
Heo et al. Expanding touch input vocabulary by using consecutive distant taps
US10402080B2 (en) Information processing apparatus recognizing instruction by touch input, control method thereof, and storage medium
WO2016050086A1 (en) Interaction method for user interfaces
Heo et al. ForceDrag: using pressure as a touch input modifier
Surale et al. Experimental analysis of mode switching techniques in touch-based user interfaces
CN202110523U (en) Terminal equipment and icon position interchanging device of terminal equipment
US20130234997A1 (en) Input processing apparatus, input processing program, and input processing method
CN202133989U (en) Terminal unit and icon position exchanging device thereof
KR20160097410A (en) Method of providing touchless input interface based on gesture recognition and the apparatus applied thereto
US20140298275A1 (en) Method for recognizing input gestures
CN105474164B (en) The ambiguity inputted indirectly is eliminated
JP2014056487A (en) Information processing apparatus, and control method and program of the same
JP2014164695A (en) Data processing device and program
TW201015402A (en) Display apparatus for processing touch events
US20180059806A1 (en) Information processing device, input control method for controlling input to information processing device, and computer-readable storage medium storing program for causing information processing device to perform input control method