TWI516989B - Three-dimensional user interface device - Google Patents

Three-dimensional user interface device

Info

Publication number
TWI516989B
Authority
TW
Taiwan
Prior art keywords
display
user
logic analyzer
array
proximity sensor
Prior art date
Application number
TW102146395A
Other languages
Chinese (zh)
Other versions
TW201435664A (en)
Inventor
潘聖雄
Original Assignee
英特爾股份有限公司
Priority date
Filing date
Publication date
Application filed by 英特爾股份有限公司
Publication of TW201435664A
Application granted granted Critical
Publication of TWI516989B


Landscapes

  • User Interface Of Digital Computer (AREA)

Description

Three-dimensional user interface device

The present invention relates to a three-dimensional user interface device.

Users interact with consumer electronic devices (CEDs) in a variety of ways. A user may interact with a CED using a remote control device that is not connected to the CED but communicates with it. For example, the remote control device may be a remote control, a wireless mouse, a wireless keyboard, or a game controller. A user may also interact with devices that are part of the CED or connected to it (e.g., a keyboard, a touchpad). The CED may include a touch-sensitive display so that the user can interact with the CED by touching the display. Many user interfaces are two-dimensional, which limits the types of interaction possible with the CED.

The present invention provides an apparatus comprising: a first body supporting a display and a first array of proximity sensors; and a second body extending from an edge of the first body, wherein the second body supports a second array of proximity sensors, and wherein the first array of proximity sensors and the second array of proximity sensors are used to define a three-dimensional (3D) user interface for the apparatus.

100‧‧‧Laptop computer

110‧‧‧Lid (cover)

150‧‧‧Base

120‧‧‧Display

160‧‧‧Keyboard

170‧‧‧Touchpad

200‧‧‧Laptop computer

210‧‧‧Proximity sensor

220‧‧‧Display

230‧‧‧User interface (keyboard)

340‧‧‧Coverage region

410‧‧‧Defined region

410A‧‧‧Defined region

410B‧‧‧Defined region

410C‧‧‧Defined region

410D‧‧‧Defined region

500‧‧‧Laptop computer

510‧‧‧Display

520‧‧‧Proximity sensor

530‧‧‧Logic analyzer

Features and advantages of the various embodiments will become apparent from the following detailed description, in which: FIGS. 1A-C illustrate various views of an exemplary laptop computer; FIG. 2 illustrates an exemplary laptop computer, according to one embodiment, that uses proximity sensors in the lid (display) and in the base (user interface); FIG. 3 illustrates an exemplary coverage region for the laptop computer created by the proximity sensors, according to one embodiment; FIG. 4 illustrates the creation of exemplary defined regions (e.g., boxes) within the coverage region, according to one embodiment; and FIG. 5 illustrates an exemplary system diagram for providing a 3D user interface for a laptop computer, according to one embodiment.

FIGS. 1A-C illustrate various views of an exemplary laptop computer 100. The computer 100 includes an upper frame (lid) 110 and a lower frame (base) 150 that are pivotally connected to each other by a hinge or the like (not labeled). The computer 100 can transition between an open configuration, in which the lid 110 extends upward from the base 150, and a closed configuration, in which the lid 110 lies flat on top of the base 150. The lid 110 may include a display 120 on which content can be viewed by a user. The base 150 may include one or more user interfaces for interacting with the computer 100. The user interfaces may include, for example, a keyboard 160 and a touchpad 170. The computer 100 may be in the open configuration when it is operated (see FIGS. 1A and 1C) and in the closed configuration when it is turned off and/or transported (see FIG. 1B).

When the computer 100 is operating, the user interacts with it through the user interfaces. The keyboard 160 lets the user enter data and/or select parameters, for example by pressing keys. The touchpad 170 lets the user scroll around the display 120, for example by moving a finger across the touchpad (the detected finger motion may be mapped to on-screen motion), in order to view and/or select content. Whether the keyboard 160, the touchpad 170, or another user interface device (e.g., a mouse) is used, the interaction is limited to two-dimensional (2D) interaction (the user interaction is confined to the plane of the display 120). For example, the user can move left, right, up, down, or in a combination of these directions (e.g., diagonally).

Making the display 120 a touchscreen, similar to the touchscreen displays used on tablet computers, can provide additional user interface options. However, such a device still provides only a 2D user interface (the user can interact with the display only within the plane of the display).

A proximity sensor can detect whether an object is within a particular distance of it. For example, a proximity sensor used on a computer can detect whether a person is within a normal operating distance (e.g., 3 feet). However, the proximity sensor cannot determine the exact distance between the person and the computer (e.g., 1 foot versus 3 feet). The proximity sensors may be, for example, inductive sensors, capacitive sensors, magnetic sensors, photoelectric sensors, other types of sensors, or some combination thereof. A photoelectric sensor may include a light source (e.g., infrared light) and a receiver for determining whether the light is reflected back.

Proximity sensors may be used in the display 120 to detect the position and/or motion of the user (or a specific part of the user, such as a hand or a finger) relative to the display 120 without the user having to touch the display 120 or use the user interface devices (e.g., keyboard 160, touchpad 170). However, in order to select content or take certain actions, the user would still need to touch the display 120 and/or use the user interface devices (e.g., keyboard 160, touchpad 170). That is, interfacing with such a device remains limited to 2D interaction.

FIG. 2 illustrates an exemplary laptop computer 200 that uses proximity sensors 210 in the lid (display) 220 and in the base (user interface) 230. The proximity sensors 210 may be, for example, inductive sensors, capacitive sensors, magnetic sensors, photoelectric sensors, other types of sensors, or some combination thereof. The proximity sensors 210 are illustrated as visible for ease of explanation, but they may be invisible to and/or unnoticeable by the user. The proximity sensors 210 do not affect the content presented on the display 220 and do not affect a user using the user interface (keyboard) 230.

The proximity sensors 210 may be organized as arrays of sensor rows and sensor columns that extend across the length and height of the display 220 and across the length and depth of the user interface (keyboard) 230. The columns of sensors 210 on the display 220 may be aligned with the columns of sensors 210 on the keyboard 230. Using two separate planes of sensors 210 allows the computer 200 to detect not only the position and/or motion of the user (or a specific part of the user, such as a hand or a finger) or of a device relative to the display 220, but also its distance from the display.

FIG. 3 illustrates an exemplary coverage region 340 created for the laptop computer 200. The proximity sensors 210 on the display 220 can detect objects at a distance roughly equal to, or slightly greater than, the distance the keyboard 230 extends out from the display 220. The proximity sensors 210 on the keyboard 230 can detect objects at a distance roughly equal to, or slightly greater than, the distance the display 220 extends up from the keyboard 230. The coverage region 340 may be the region where the coverage of the sensors 210 on the display 220 overlaps the coverage of the sensors 210 on the keyboard 230. That is, the coverage region 340 may extend upward as high as the display 220, outward as far as the keyboard 230, and across the length of the display 220 and keyboard 230.

FIG. 4 illustrates the creation of exemplary defined regions (e.g., boxes) 410 within the coverage region 340. The proximity sensors (not illustrated) for the display (not illustrated) may be oriented along the x-axis (the length of the display) and the z-axis (the height of the display). The proximity sensors for the keyboard (not illustrated) may be oriented along the x-axis (the length of the keyboard) and the y-axis (the depth of the keyboard). The x-axis sensors for the display and the keyboard may be aligned with each other. Each defined region may be the region surrounding an intersection point of one proximity sensor from each axis. The size of the regions may depend on the number of sensors and on how close the sensors are to one another. As illustrated, there are three defined regions along each axis (e.g., x = 1-3, y = 1-3, and z = 1-3), indicating that three sensors are associated with each axis. Accordingly, the display and the keyboard may each have nine (3 x 3) sensors associated with them, giving a total of 27 (3 x 3 x 3) defined regions.
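
As an illustration of how such a grid of defined regions could be derived in software, the sketch below enumerates one region per intersection of an x-, y-, and z-axis sensor. It is a minimal sketch only: the patent does not prescribe any data structure, and the 3 x 3 x 3 dimensions simply mirror the example above.

```python
from itertools import product

def build_defined_regions(num_x=3, num_y=3, num_z=3):
    """Create one defined region per intersection of an x-, y-, and z-axis proximity sensor.

    num_x: aligned sensor columns shared by the display and keyboard (along the length)
    num_y: sensor rows on the keyboard (along its depth)
    num_z: sensor rows on the display (along its height)
    """
    return {
        (x, y, z): {"center": (x, y, z), "occupied": False}
        for x, y, z in product(range(1, num_x + 1),
                               range(1, num_y + 1),
                               range(1, num_z + 1))
    }

regions = build_defined_regions()
assert len(regions) == 27  # 3 x 3 x 3 defined regions, matching the example above
```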

For example, a defined region 410A may be defined as the region surrounding the intersection point of the proximity sensors at x = 1 (the first aligned column), y = 2 (the second row on the keyboard), and z = 2 (the second row on the display). A defined region 410B may include the proximity sensors at x = 3 (the third aligned column), y = 1 (the first row on the keyboard), and z = 3 (the third row on the display). A defined region 410C may include the proximity sensors at x = 2 (the second aligned column), y = 2 (the second row on the keyboard), and z = 3 (the third row on the display). A defined region 410D may include the proximity sensors at x = 3 (the third aligned column), y = 3 (the third row on the keyboard), and z = 1 (the first row on the display).

If the three proximity sensors associated with a defined region indicate that an object (e.g., a finger or a stylus) is present, the computer may determine that an object is located within that defined region. The defined regions may be associated with content presented on a corresponding area of the display. According to one embodiment, the proximity sensors on the x-axis and z-axis may be used to define areas on the display. For example, an icon for a particular program may be associated with a defined region. If a finger, hand, or other device is determined to be within that defined region, the icon may be illuminated, and if the user wants to select the icon, the user may make a motion toward the display to select it.
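
A minimal sketch of that decision rule follows, assuming the sensor readings arrive as sets of triggered indices per axis (an assumed data layout, not one stated in the patent):

```python
def occupied_regions(x_hits, y_hits, z_hits, grid=(3, 3, 3)):
    """Return every defined region whose three associated proximity sensors all report an object.

    x_hits / y_hits / z_hits: sets of 1-based sensor indices currently detecting something.
    grid: number of sensors along the x, y, and z axes.
    """
    nx, ny, nz = grid
    return [(x, y, z)
            for x in range(1, nx + 1) if x in x_hits
            for y in range(1, ny + 1) if y in y_hits
            for z in range(1, nz + 1) if z in z_hits]

# A fingertip over column x=1, keyboard row y=2, display row z=2 (region 410A above):
# occupied_regions({1}, {2}, {2}) -> [(1, 2, 2)]
```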

According to one embodiment, the proximity sensors on all three axes may be used to assign items on the display to the defined regions. For example, if the display presents a 3D desktop, the various items on the desktop may be associated with different defined regions. Items in the upper-right corner of the display may be associated with the proximity sensors in column x = 3 and row z = 3. An item in the upper-right corner of the 3D desktop that is closest to the user (e.g., the first of three overlapping icons) may be associated with row y = 3, an item at a middle distance from the user (e.g., the second of the three overlapping icons) may be associated with row y = 2, and the item farthest from the user (e.g., the third of the three overlapping icons) may be associated with row y = 1. If a finger, hand, or other device is determined to be present within a defined region, the corresponding icon may be illuminated. If the user wants to select that icon, the user may make a defined motion (e.g., a gesture toward the display) to select it, as sketched below.
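
A sketch of that depth mapping follows, using the stacked-icon example above. The item names and the dictionary-based lookup are illustrative assumptions; the patent only states that on-screen items may be associated with defined regions.

```python
# Hypothetical assignment of three overlapping icons in the upper-right corner
# (column x=3, display row z=3) to keyboard rows y=3 (nearest), y=2, and y=1 (farthest).
REGION_TO_ITEM = {
    (3, 3, 3): "icon_front",   # first of the three overlapping icons, closest to the user
    (3, 2, 3): "icon_middle",  # second icon, at a middle distance
    (3, 1, 3): "icon_back",    # third icon, farthest from the user
}

def item_to_highlight(region):
    """Return the display item assigned to the occupied region, or None if nothing is assigned."""
    return REGION_TO_ITEM.get(region)

# item_to_highlight((3, 2, 3)) -> "icon_middle"; a subsequent gesture toward the display
# (decreasing y while x and z stay the same) would then select the highlighted icon.
```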

In addition to selecting icons from the desktop, the proximity sensors may be used to track motion relative to the display, similar to the way a touchscreen and/or a touchpad is used, but without actually touching the device. For example, if the user wants to turn the pages of a book being viewed on the display, the user may swipe a finger from right to left to advance a page, or from left to right to go back a page. The action taken in response to the motion of the user's finger (hand, device, or the like) depends on the operating mode of the computer and on whatever applications may be running on it.
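
One way such a swipe could be recognized from the sensor data is sketched below: the fingertip's path is reduced to the sequence of defined regions it passes through, and sustained right-to-left or left-to-right travel along the x-axis is mapped to a page turn. The threshold and the action names are assumptions made for illustration.

```python
def detect_horizontal_swipe(region_path, min_span=2):
    """Classify a time-ordered path of (x, y, z) regions as a horizontal swipe, if any.

    Returns "page_forward" for right-to-left travel, "page_back" for left-to-right travel,
    or None when the path does not cover enough columns to count as a swipe.
    """
    if len(region_path) < 2:
        return None
    start_x, end_x = region_path[0][0], region_path[-1][0]
    if abs(end_x - start_x) < min_span:
        return None  # not enough horizontal travel
    return "page_forward" if end_x < start_x else "page_back"

# detect_horizontal_swipe([(3, 2, 2), (2, 2, 2), (1, 2, 2)]) -> "page_forward"
```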

Hardware and/or software processing may be used to analyze the data from the proximity sensors to determine the position and motion of the device (e.g., finger, hand, wand) relative to the display. The proximity sensors may provide data to a processor that analyzes it to detect and/or recognize motion and/or objects within the coverage region 340. As noted above, the action taken based on the detection/recognition depends on the current operating state of the computer.

According to one embodiment, the use of proximity sensors allows the computer to act as a 3D scanner. Processing may be performed on the defined regions determined to contain the object in order to obtain an idea of the object's size and shape. It should be noted that the resulting 3D image is limited to the surfaces of the object facing the display and the keyboard.
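
A rough sketch of that idea: repeatedly sampling which defined regions are occupied yields a coarse voxel-style model of the facing surfaces, with resolution limited by the sensor grid. The function and its inputs are illustrative assumptions, not part of the patent text.

```python
def coarse_scan(occupancy_samples):
    """Accumulate occupied (x, y, z) regions over several sensor sweeps into a voxel set.

    occupancy_samples: iterable of lists of occupied regions, one list per sweep
    (e.g., the output of occupied_regions() above, collected over time).
    The result only describes the surfaces facing the display and keyboard.
    """
    voxels = set()
    for sample in occupancy_samples:
        voxels.update(sample)
    return voxels

# A crude size estimate could then be taken from the bounding extent of the voxel set:
# xs, ys, zs = zip(*coarse_scan(samples)); size = (max(xs)-min(xs), max(ys)-min(ys), max(zs)-min(zs))
```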

According to one embodiment, the generation of a 3D image may be used as a security measure for authentication and/or access. For example, when a user attempts to log in to the computer, the computer may use the proximity sensors to generate, for example, a 3D image of the user's hand or face. The generated 3D image may be compared with an authenticated 3D image to determine whether access should be granted. A specific motion of the user's finger through the coverage region may likewise be used as a security measure for authentication and/or access. For example, to authenticate a user, the detected motion of the user's finger may be compared with a stored motion. The authentic motion may be, for example, a sweep of the finger from the upper-left front portion of the coverage region to the lower-right rear portion.
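
The gesture-based variant could be implemented as a comparison between the detected region path and an enrolled path, as sketched below. The exact-match rule with a small per-step tolerance, and the interpretation of "front" as the larger y index, are assumptions; the patent only says the detected motion is compared with the stored motion.

```python
def gesture_matches(observed_path, enrolled_path, tolerance=0):
    """Compare a detected fingertip path with a stored enrollment path, region by region.

    Both paths are time-ordered lists of (x, y, z) regions; tolerance allows each step to
    differ by up to that many regions along any axis.
    """
    if len(observed_path) != len(enrolled_path):
        return False
    for (ox, oy, oz), (ex, ey, ez) in zip(observed_path, enrolled_path):
        if max(abs(ox - ex), abs(oy - ey), abs(oz - ez)) > tolerance:
            return False
    return True

# Hypothetical enrolled gesture: upper-left front of the coverage region to its lower-right rear.
enrolled = [(1, 3, 3), (2, 2, 2), (3, 1, 1)]
# gesture_matches(detected_path, enrolled) -> True would grant access, False would deny it.
```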

Those skilled in the art will recognize that the more proximity sensors are used, the finer the granularity of the 3D user interface. Accordingly, when the proximity sensors are used for 3D scanning and/or user authentication, at least a certain number of sensors may be required per region.

FIG. 5 illustrates an exemplary system diagram for a laptop computer 500 that provides a 3D user interface. The computer 500 may include a display 510, a plurality of proximity sensors 520, and a logic analyzer (logic) 530. The display 510 presents information. The proximity sensors 520 detect the presence of nearby objects (e.g., the user's finger, the user's hand, a stylus). The logic analyzer 530 processes the input from the proximity sensors. The logic analyzer 530 may be hardware and/or software logic and may be one or more processors used within the computer 500.

The logic analyzer 530 may be configured to define a plurality of coverage regions corresponding to intersection points of the proximity sensors. The logic analyzer 530 may be configured to indicate the presence of an object within a coverage region in response to detection by the proximity sensors associated with that coverage region. The logic analyzer 530 may be configured to allow a user to identify an item on the display 510 for selection by placing an object within a corresponding coverage region. The logic analyzer 530 may be configured to highlight the identified item for selection. The logic analyzer 530 may be configured to select the item in response to motion of the object toward the display. The logic analyzer 530 may be configured to identify an action to be taken on the display in response to detection of motion of an object through one or more of the coverage regions. The logic analyzer 530 may be configured to generate a 3D image of an object in response to detection of the object within one or more coverage regions. The logic analyzer 530 may be configured to perform authentication using the 3D image.

The 3D user interface has been described with particular reference to a laptop computer, but it is not limited thereto. Rather, the 3D interface may be used with any device that includes a display and a keyboard extending from the display, such as a desktop computer, certain tablet computers with docking keyboards (e.g., the Surface™ sold by Microsoft®), certain wireless telephones, and/or certain personal digital assistants (PDAs) and the like. Furthermore, the second surface is not limited to a keyboard. Rather, the other surface may be any type of user interface or device that extends out from a display. For example, the second surface may be another display, or it may be a cover for the device that extends outward when opened. For example, a 3D interface may be provided for a tablet computer by using a cover for the device. The proximity sensors in the cover would need to provide data to the computer, so some type of communication interface would be required.

Although the disclosure has been illustrated by reference to specific embodiments, it will be apparent that the disclosure is not limited to those embodiments, as various changes and modifications may be made to them without departing from the scope of the disclosure. Reference to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described herein is included in at least one embodiment. Thus, appearances of the phrases "one embodiment" or "an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment.

The various embodiments are intended to be protected broadly within the spirit and scope of the appended claims.


Claims (29)

1. An apparatus comprising: a first body supporting a display and a first array of proximity sensors; a second body extending from an edge of the first body, wherein the second body supports a second array of proximity sensors, and wherein the first array of proximity sensors and the second array of proximity sensors are to define a three-dimensional (3D) user interface for the apparatus; and a logic analyzer, at least a portion of which is hardware, wherein the logic analyzer is configured to define a 3D region between the first body and the second body for the 3D user interface as at least one 3D coverage region corresponding to intersection points of the proximity sensors.

2. The apparatus of claim 1, wherein the logic analyzer is configured to indicate that an object is present within a 3D coverage region of the at least one 3D coverage region in response to detection by the proximity sensors associated with that 3D coverage region.

3. The apparatus of claim 2, wherein the logic analyzer is configured to allow a user to identify an item on the display for selection by placing an object within a corresponding 3D coverage region.

4. The apparatus of claim 3, wherein the object is a finger of the user.

5. The apparatus of claim 3, wherein the object is a hand of the user.

6. The apparatus of claim 3, wherein the object is a pointing device.

7. The apparatus of claim 3, wherein the logic analyzer is configured to highlight the identified item for selection.

8. The apparatus of claim 7, wherein the logic analyzer is configured to select the item in response to motion of the object toward the display.

9. The apparatus of claim 2, wherein the logic analyzer is configured to identify an action to be taken on the display in response to detection of motion of an object through one or more 3D coverage regions of the at least one 3D coverage region.

10. The apparatus of claim 1, wherein the first array of proximity sensors includes a plurality of sensors arranged in rows and columns, and the second array of proximity sensors includes a plurality of sensors arranged in rows and columns.

11. The apparatus of claim 10, wherein the rows of proximity sensors in the first array are aligned with respect to the rows of proximity sensors in the second array.

12. The apparatus of claim 10, wherein the columns of proximity sensors in the first array are aligned with respect to the columns of proximity sensors in the second array.

13. The apparatus of claim 1, wherein the second body is a user interface device.

14. The apparatus of claim 13, wherein the user interface is a keyboard.

15. The apparatus of claim 1, wherein the second body extends from a lower edge of the first body.

16. The apparatus of claim 1, wherein the second body extends from a side edge of the first body.

17. The apparatus of claim 2, wherein the logic analyzer is configured to generate a 3D image of an object in response to detection of the object within one or more 3D coverage regions.

18. The apparatus of claim 17, wherein the logic analyzer is configured to perform authentication using the 3D image.

19. The apparatus of claim 17, wherein the object is a hand of the user.

20. The apparatus of claim 17, wherein the object is a head of the user.

21. An apparatus comprising: a display, wherein the display is to support a first array of proximity sensors; a user interface device extending from an edge of the display, wherein the user interface device is to support a second array of proximity sensors; and a logic analyzer, at least a portion of which is hardware, configured to detect three-dimensional (3D) user interaction with the display based at least in part on input from the first array of proximity sensors and the second array of proximity sensors, wherein the logic analyzer is configured to define a 3D region between the display and the user interface device as at least one 3D coverage region corresponding to intersection points of the proximity sensors and to detect 3D user interaction within the at least one 3D coverage region.

22. The apparatus of claim 21, wherein the logic analyzer is configured to indicate the presence of an object within a 3D coverage region of the at least one 3D coverage region in response to detection by the proximity sensors associated with that 3D coverage region.

23. The apparatus of claim 21, wherein the logic analyzer is configured to identify selection of an item on the display in response to an object being placed within a corresponding 3D coverage region.

24. The apparatus of claim 23, wherein the object includes at least one selected from a list comprising a finger of the user, a hand of the user, and a pointing device.

25. The apparatus of claim 23, wherein the logic analyzer is configured to highlight the identified item for selection.

26. The apparatus of claim 22, wherein the logic analyzer is configured to identify selection of the item in response to motion of the object toward the display.

27. The apparatus of claim 21, wherein the logic analyzer is configured to identify an action to be taken on the display in response to detection of motion of an object through one or more 3D coverage regions of the at least one 3D coverage region.

28. The apparatus of claim 21, wherein the logic analyzer is configured to generate a 3D image of an object in response to detection of the object within one or more 3D coverage regions.

29. The apparatus of claim 28, wherein the logic analyzer is configured to use the 3D image to authenticate a user.
TW102146395A 2012-12-28 2013-12-16 Three-dimensional user interface device TWI516989B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
MYPI2012005686 2012-12-28

Publications (2)

Publication Number Publication Date
TW201435664A TW201435664A (en) 2014-09-16
TWI516989B 2016-01-11

Family

ID=51943376

Family Applications (1)

Application Number Title Priority Date Filing Date
TW102146395A TWI516989B (en) 2012-12-28 2013-12-16 Three-dimensional user interface device

Country Status (1)

Country Link
TW (1) TWI516989B (en)

Also Published As

Publication number Publication date
TW201435664A (en) 2014-09-16


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees