TW201931096A - Interaction method for user interface and electronic device thereof - Google Patents


Info

Publication number
TW201931096A
Authority
TW
Taiwan
Prior art keywords
page
user
indicator
head
electronic device
Prior art date
Application number
TW107107288A
Other languages
Chinese (zh)
Inventor
黃慕真
戴雅麗
江昱嫺
Original Assignee
大陸商上海蔚蘭動力科技有限公司
英屬開曼群島商麥迪創科技股份有限公司
Priority date
Filing date
Publication date
Priority claimed from CN201711482541.2A external-priority patent/CN109992094A/en
Application filed by 大陸商上海蔚蘭動力科技有限公司 and 英屬開曼群島商麥迪創科技股份有限公司
Publication of TW201931096A publication Critical patent/TW201931096A/en


Abstract

An interaction method for a user interface of an electronic device, the user interface including a display device, includes: detecting a biometric signal of the head of a user and controlling the display device to display a page according to the head biometric signal; detecting a biometric signal of the eyes of the user and controlling the position of an indicator on the page according to the eye biometric signal; and detecting an instruction of the user and, when the instruction is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page.

Description

Interaction method for a user interface and electronic device thereof

The present invention relates to an interaction method for a user interface, and more particularly to a method that allows a user to interact with a user interface without using hand operations.

The user interface of an electronic device often comprises multiple hierarchies: during operation, a user may switch between different pages within the same level, or move to a page or screen at the next level. For example, in a menu mode, the user may select one item to execute a specific function, or select another item to enter the next-level menu. Generally, when operating a computer or mobile phone, the user must use touch gestures or enter commands through an input interface such as a keyboard or mouse. In other words, conventional mobile phone and computer operations require one or both hands, and the user must watch the on-screen information for an extended time to interact with the user interface of the electronic device. Such operations are unsuitable while driving: if the driver must free one or both hands to operate an electronic device, the driver is easily distracted and driving safety suffers. It is therefore necessary to provide a more suitable interaction method for electronic devices, for use by vehicle drivers.

The main objective of the present invention is therefore to provide a method that allows a user to interact with a user interface without hand operations, so as to solve the driver-distraction problem while reducing the complexity of operating an electronic device and improving driving safety.

The present invention discloses an interaction method for a user interface of an electronic device, the user interface comprising a display device. The interaction method comprises: detecting a biometric signal of a user's head and controlling the display device to display a page according to the head biometric signal; detecting a biometric signal of the user's eyes and controlling the position of an indicator on the page according to the eye biometric signal; and detecting an instruction of the user and, when the instruction is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page.

The present invention further discloses an electronic device comprising a user interface and a processing device. The user interface, which comprises a display device and a sensing device, is used to interact with a user. The sensing device detects the user's head biometric signal and eye biometric signal. The processing device, coupled to the user interface, performs the following steps: controlling the display device to display a page according to the head biometric signal; controlling the position of an indicator on the page according to the eye biometric signal; and, when an instruction of the user is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page.

The present invention further discloses an interaction method for a user interface of an electronic device, the user interface comprising a display device. The interaction method comprises: detecting an eye biometric signal of a user and controlling the position of an indicator on a page displayed by the display device according to the eye biometric signal; and detecting a facial command of the user and, when the facial command is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page; wherein the facial command is a combination of a plurality of facial movements.

The present invention further discloses an electronic device comprising a user interface and a processing device. The user interface, which comprises a display device and a sensing device, is used to interact with a user. The sensing device detects an eye biometric signal and a facial command of the user, wherein the facial command is a combination of a plurality of facial movements. The processing device, coupled to the user interface, performs the following steps: controlling the position of an indicator on a page displayed by the display device according to the eye biometric signal; and, when the facial command is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page.

FIG. 1 illustrates an embodiment of the invention. The electronic device 10 includes a user interface 100 and a processing device 110. The user interface 100 provides the medium for interacting with the user and includes a display device 102 and a sensing device 104; the electronic device 10 may optionally include a user input device 120. The sensing device 104 detects biometric signals of the user's head and eyes. The processing device 110, coupled to the user interface 100, controls the display device 102 to display a page according to the head biometric signal, and controls the position of an indicator on the page according to the eye biometric signal. The user input device 120 may receive a command from the user, so that the electronic device 10 is controlled to perform a specific function according to the position of the indicator on the page.

In one embodiment, the sensing device 104 may detect the user's eye biometric signal, and may detect the user's facial biometric signal as a facial command. The processing device 110 may control the position of an indicator on a page displayed by the display device 102 according to the eye biometric signal and, when the facial command is detected, control the electronic device 10 to perform a specific function according to the position of the indicator on the page. The facial command may be a combination of a plurality of facial movements.

In one embodiment, the sensing device 104 may detect biometric signals of the user's head, eyes, and face. The processing device 110, coupled to the user interface 100, may control the display device 102 to display a page according to the head biometric signal, control the position of an indicator on the page according to the eye biometric signal, and control the electronic device 10 to perform a specific function according to the facial biometric signal and the position of the indicator on the page.

In one embodiment, the user's head biometric signal detected by the sensing device 104 includes head pose, head position, head movement, and any combination of head pose, position, and/or movement. The head position represents the relative position between the head and the sensor, or between the head and the display. The head pose represents a nod, a head shake, a head tilt, or the like.

In one embodiment, the user's eye biometric signal detected by the sensing device 104 includes a gaze vector and a gaze point. The gaze vector represents the gaze direction derived from head position, head pose, iris-reflected light, iris position, and/or pupil position. The gaze vector may be expressed in world coordinates, face coordinates, or screen coordinates. The gaze point is the intersection of the gaze vector and the screen.
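The gaze point defined above, the intersection of the gaze vector and the screen, can be sketched as a ray-plane intersection. The sketch below is illustrative only; the function name and the convention that the screen lies in the plane z = 0 of a screen coordinate system are assumptions, not part of the disclosure:

```python
def gaze_point_on_screen(origin, direction):
    """Intersect a gaze ray with the screen plane z = 0.

    origin: (x, y, z) eye position in screen coordinates (z > 0 in front
    of the screen); direction: (dx, dy, dz) gaze vector, dz < 0 when the
    user looks toward the screen. Returns the (x, y) gaze point, or None
    if the gaze never reaches the screen.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz >= 0:            # looking away from, or parallel to, the screen
        return None
    t = -oz / dz           # ray parameter where z reaches 0
    return (ox + t * dx, oy + t * dy)
```

For example, an eye 60 cm in front of the screen origin looking straight ahead yields the gaze point (0, 0).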

In one embodiment, the user's facial biometric signal detected by the sensing device 104 includes a facial feature, a facial movement, a series of facial movements, or any combination of facial features and/or movements.

In one embodiment, the sensing device 104 may include any type of sensor for acquiring the various biometric signals, such as a visible-light camera, an infrared camera, a light sensor, or a combination of multiple sensors, but is not limited thereto.

In one embodiment, the processing device 110 may convert the biometric signals of the user's head, eyes, and/or face into different commands to control the image displayed on the display device 102.

In one embodiment, displaying the page includes panning the displayed image in response to a specific head movement. The specific head movement may be realized by head pose, head position, head motion, or any combination of head pose, position, and/or motion. Taking FIG. 2A as an example, when the user's head tilts to the left, the displayed image may be controlled to pan to the left.

In one embodiment, displaying the page includes zooming the displayed image out or in, in response to a specific head movement. The specific head movement may be realized by head pose, head position, head motion, or any combination of head pose, position, and/or motion. Taking FIG. 2B as an example, when the user's head moves toward the screen, the displayed image may be controlled to zoom in.
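The pan and zoom behaviours described above can be summarized as a mapping from head displacement to a view command. This is a minimal sketch under assumed conventions (lateral displacement negative to the left, displacement toward the screen positive, and hypothetical thresholds); the function and command names are illustrative, not part of the disclosure:

```python
def head_to_view_command(head_dx, head_dz, pan_thresh=0.05, zoom_thresh=0.05):
    """Map head displacement to a pan/zoom command.

    head_dx: lateral head displacement (negative = left).
    head_dz: displacement toward the screen (positive = closer).
    Thresholds suppress small, unintentional movements.
    """
    if head_dx < -pan_thresh:
        return "pan_left"     # head tilts/moves left -> image pans left (FIG. 2A)
    if head_dx > pan_thresh:
        return "pan_right"
    if head_dz > zoom_thresh:
        return "zoom_in"      # head approaches the screen -> zoom in (FIG. 2B)
    if head_dz < -zoom_thresh:
        return "zoom_out"
    return "hold"
```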

To realize the panning and zooming of the above embodiments, the background region of the active area on the screen may present a virtual image; the active area is a subset of the virtual image, and its extent is smaller than that of the virtual image.
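Since the active area is a smaller window over the larger virtual image, panning amounts to moving that window while keeping it inside the virtual image. A minimal sketch, assuming pixel coordinates and a hypothetical clamping helper not named in the disclosure:

```python
def clamp_viewport(x, y, vw, vh, img_w, img_h):
    """Keep a vw-by-vh active-area viewport inside an img_w-by-img_h
    virtual image; (x, y) is the viewport's top-left corner after a pan."""
    x = max(0, min(x, img_w - vw))
    y = max(0, min(y, img_h - vh))
    return x, y
```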

In one embodiment, the step of controlling the position of an indicator on the page may be realized in various ways. For example, the indicator may be shown as a cursor in the active area of the screen, or a specific region, window, and/or image may be marked by a recognizable color, border, convexity, or concavity. Taking FIG. 2C as an example, on the screen 20, both a cursor and a border are used to mark a specific item.

In one embodiment, the user input device 120 may receive the user's command in any manner, such as by pressing a physical button, inputting a specific gesture, or touch-screen input, but is not limited thereto.

In one embodiment, the step of controlling the electronic device 10 to perform a specific function according to the position of the indicator on the page may be realized in various ways. For example, the region, window, and/or image selected by the indicator may indicate the specific function to be performed; the region, window, and/or image may correspond to invoking a specific function or instruction. The invoked function may be, for example, launching an application, returning to the main menu, pulling down a menu, or restarting, but is not limited thereto.

In this manner, the user may operate the electronic device 10 through biometric signals of the head, eyes, and/or face while viewing the content on the display device 102, thereby achieving interaction between the user and the user interface 100.

In one embodiment, the display device 102 may be any type of display, such as a liquid crystal display (LCD) or an organic light emitting diode (OLED) display.

In one embodiment, the processing device 110 may be a central processing unit (CPU), a microprocessor, a micro controller unit (MCU), or an application-specific integrated circuit (ASIC), and is not limited to implementation in hardware or software.

In one embodiment, the electronic device 10 may be an automotive electronic device.

FIG. 3 is a schematic diagram of an interaction flow 30 according to an embodiment of the invention. The interaction flow 30 may be used in an electronic device, such as the electronic device 10 of FIG. 1, for interacting with a user. As shown in FIG. 3, the interaction flow 30 includes the following steps:

Step 300: Start.

Step 302: Detect a head biometric signal of the user, such as head pose, head position, head movement, or any combination of head pose, position, and/or movement, and control the display device to display a page according to the detected head biometric signal.

Step 304: Detect an eye biometric signal of the user, such as a gaze vector, a gaze point, or a combination of both, and control the position of an indicator on the page according to the detected eye biometric signal.

Step 306: Detect an instruction or facial biometric signal of the user and, when the instruction or facial biometric signal is detected, control the electronic device to perform a specific function according to the position of the indicator on the page.

Step 308: End.
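One pass of the flow above can be sketched over a menu modelled as a simple list of options. This is an illustrative sketch only; the function name, the dictionary-based state, and the list-based menu are assumptions, not part of the disclosure:

```python
def interaction_step(state, head_shift, gaze_index, command_detected):
    """One pass of interaction flow 30 over a menu of options."""
    options = state["options"]
    # Step 302: head movement scrolls the visible page.
    state["offset"] = max(0, min(state["offset"] + head_shift, len(options) - 1))
    # Step 304: gaze places the indicator on an option.
    state["indicator"] = max(0, min(gaze_index, len(options) - 1))
    # Step 306: a detected command executes the indicated option.
    if command_detected:
        return options[state["indicator"]]
    return None
```

A pass with no command only updates the page and indicator; a pass with a detected command returns the function under the indicator.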

According to the interaction flow 30, the sensing device 104 may detect the user's head biometric signal, and the processing device 110 may control the display device 102 to display a page accordingly. For example, the processing device 110 may scroll or move the image displayed on the display device 102 in a specific direction according to the head biometric signal. Suppose the driver is searching a menu for an application, a contact, or a mode-switching option: if the sensing device 104 detects that the driver's head moves to the left, the image on the display device 102 may move to the left to reveal options further to the right; conversely, if the sensing device 104 detects that the driver's head moves to the right, the image may move to the right to reveal options further to the left. In another embodiment, a leftward head movement may instead control the display device 102 to show the options on the left, and a rightward head movement may control it to show the options on the right. Alternatively, the driver may be browsing a page, and the processing device 110 may scroll the page according to the detected head biometric signal, much as a user scrolls a page with a mouse wheel, except that here no hands are needed.

In another embodiment, the processing device 110 may also zoom the image displayed on the display device 102 in or out according to the head biometric signal. For example, when the driver's head is detected approaching the sensing device 104, the image may be zoomed in to highlight the option the user wants to view; when the driver's head is detected moving away from the sensing device 104, the image may be zoomed out so that more options are shown for the driver to choose from. Moreover, besides detecting the direction of head movement, the processing device 110 may detect whether the head tilts or turns to the left or right when judging the head movement. For example, a leftward head tilt may scroll the page to the left, and a rightward tilt may scroll it to the right. Any manner of controlling the page display through the head biometric signal falls within the scope of the invention.

FIGS. 4A-4C illustrate a detailed operation of controlling the display device 102 to display a page 400 through the head biometric signal. The page 400 contains a form with multiple options, such as Settings, Information mode, and GPS mode. As shown in FIG. 4A, the page 400 shows only the Information mode option in full; the other options are not fully displayed. When the user wants to view the other options, the content shown on the page 400 may be controlled by head pose, position, and/or movement. As shown in FIG. 4B, the user may move the head to the right; when the sensing device 104 detects the rightward head movement, the content of the image may be tilted to highlight the option on the right, so that the user can easily select the GPS mode option. As shown in FIG. 4C, the user may move the head to the left; when the sensing device 104 detects the leftward head movement, the content may be tilted the other way to highlight the option on the left, so that the user can easily select the Settings option.

In the above embodiment, the options of the form on the page 400 are arranged in a row; in another embodiment, the options may be arranged in an arc. In that case, when a rightward head movement is detected, the options may be moved clockwise; when a leftward head movement is detected, the options may be moved counterclockwise.
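The arc arrangement above behaves like a rotation of the option list. A minimal sketch, assuming a hypothetical helper name and treating a clockwise step as the last option wrapping around to the front:

```python
def rotate_options(options, head_dir):
    """Rotate an arc-shaped option list in response to head movement.

    head right -> clockwise (last option wraps to the front);
    head left  -> counterclockwise (first option wraps to the end).
    """
    if head_dir == "right":
        return options[-1:] + options[:-1]
    if head_dir == "left":
        return options[1:] + options[:1]
    return options
```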

In this example, the electronic device 10 may be, for instance, an automotive electronic device that the vehicle driver can conveniently operate. The display device 102 may thus be the dashboard of the vehicle, and the page 400 and form it displays contain various driving-related information. For example, after the Information mode option is entered, the display device 102 may show driving information such as weather, time, and distance traveled; after the GPS mode option is entered, the display device 102 may show GPS navigation information such as maps and destination information; the Settings option is used to enter a settings menu for various configurations. The options on the form are not limited to those described above; in other embodiments they may also include a speedometer, a dashboard camera, or linking a mobile device. All of these options may be placed in the form of FIGS. 4A-4C, and the form may be scrolled or moved left or right through head pose, position, and/or movement to show different options.

The sensing device 104 may detect the user's eye biometric signal, and the processing device 110 may control the position of an indicator on the page accordingly. The indicator may take various forms, such as a cursor, a graphic, a border, or a specific color, to indicate the item or region the user wants to select. FIG. 5 shows that when the user gazes at the Information mode option, the processing device 110 may move the indicator onto that option. In this example, the indicator is shown as a bolder border around the indicated option, but those skilled in the art will understand that the indicator may be presented in other ways, such as a brighter border or a specific color on the option to be selected, as long as its presentation lets the user clearly know where the indicator is. In another embodiment, the indicator may be presented as a cursor, which moves in any direction according to where the user's gaze falls. Controlling the indicator with eye gaze is analogous to moving it with a mouse or a sliding touch gesture.
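Snapping the indicator to the option being gazed at can be sketched as a nearest-centre lookup over the options' on-screen positions. The one-dimensional form below is an illustrative sketch only; the function name and the horizontal-centre representation are assumptions:

```python
def indicator_target(gaze_x, option_centers):
    """Return the index of the option whose horizontal centre is
    closest to the horizontal gaze point."""
    return min(range(len(option_centers)),
               key=lambda i: abs(option_centers[i] - gaze_x))
```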

The user input device 120 allows the user to enter an instruction, so that the processing device 110 controls the electronic device 10 to perform a specific function according to the position of the indicator on the page. In one embodiment, the instruction that triggers the specific function is a facial command of the user detected by the sensing device 104. If controlling the indicator with eye gaze is analogous to moving it with a mouse or a sliding touch gesture, then controlling the electronic device 10 with a facial command is analogous to clicking a mouse button or pressing a confirmation key on a keyboard or touch screen.

The facial command may be realized in various ways. For example, it may be any facial expression, such as a smile, a laugh, an open mouth, or a sideways twist of the nose; it may also be a dynamic combination of a series of facial movements. Facial expressions may be preset: for example, the user may set a smile (i.e., raised mouth corners) as a facial command, so that when the sensing device 104 detects the user's mouth corners rising, it determines that a smile command has been received. Taking FIGS. 4A-5 as an example: the user may move the indicator between options with the eyes; to select the Information mode option, the user moves the indicator onto it and starts to smile. Once the sensing device 104 detects the smile, the processing device 110 determines that a confirmation instruction has been received and controls the electronic device 10 to enter the information mode, for example displaying driving information on the display device 102.
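The raised-mouth-corner criterion above can be sketched as a simple geometric test on facial landmarks. This is a deliberately crude illustration; the landmark names, the pixel threshold, and the image convention that y grows downward are assumptions, not the disclosed detection method:

```python
def is_smile(landmarks, lift_thresh=2.0):
    """Crude smile test on mouth landmarks (image y-axis points down).

    landmarks: dict with 'mouth_left', 'mouth_right', 'mouth_center'
    as (x, y) pixel positions. A smile is assumed when both mouth
    corners sit clearly above (smaller y than) the mouth centre.
    """
    cy = landmarks["mouth_center"][1]
    left_lift = cy - landmarks["mouth_left"][1]
    right_lift = cy - landmarks["mouth_right"][1]
    return left_lift > lift_thresh and right_lift > lift_thresh
```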

In another embodiment, each user may set his or her own facial expression. Besides recognizing facial expressions, the processing device 110 must then recognize individual facial features. In one embodiment, a car may be shared by a couple who use different expression settings: the husband sets a smile as the confirmation facial command, and the wife sets a blink. When the husband drives, the system recognizes the driver as the husband from his facial features and treats a detected smile as a confirmation instruction; when the wife drives, the system recognizes the driver as the wife and treats a detected blink as a confirmation instruction.
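The per-user settings above reduce to a lookup from the identified driver to that driver's confirmation expression. A minimal sketch, with hypothetical user identifiers and expression labels standing in for the couple in the example:

```python
# Per-user confirmation settings (illustrative values from the example).
CONFIRM_EXPRESSION = {"husband": "smile", "wife": "blink"}

def is_confirm(user_id, detected_expression):
    """A detected expression confirms only when it matches the
    expression this identified user preset as a confirmation command."""
    return CONFIRM_EXPRESSION.get(user_id) == detected_expression
```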

The above detection of the head, eyes, and face may all be realized with computer vision techniques. The sensing device 104 may acquire an image containing the user's head; the processing device 110 may then down-sample the image to reduce the amount of data, extract the user's head image with face detection, and set a bounding box corresponding to the head image. Down-sampling reduces the data volume and speeds up subsequent processing, but in another embodiment the down-sampling step may be omitted and face detection performed directly on the image acquired by the sensing device 104. The processing device 110 crops the image content inside the bounding box for subsequent processing; the judgments of head, eye, and face movements may all be made from the image inside the bounding box.
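The down-sample and bounding-box-crop steps above can be sketched on an image represented as a nested list of pixel values. This is an illustrative sketch only; real pipelines would use a face detector (e.g. a cascade classifier) to produce the box, which is simply taken as given here:

```python
def downsample(image, factor):
    """Nearest-neighbour down-sampling: keep every `factor`-th pixel
    in both directions to reduce the amount of data."""
    return [row[::factor] for row in image[::factor]]

def crop_bounding_box(image, box):
    """Crop the head region given a face-detector bounding box
    box = (x, y, w, h), with (x, y) the top-left corner."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]
```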

The processing device 110 may obtain the feature-point positions of the facial organs, such as the eyes, mouth, and nose, through facial landmark detection. The processing device 110 may use the user's head biometric signal to judge the head movement, such as the head's position and whether it moves, turns, or tilts. For the eye image, the processing device 110 may judge the pupil position and/or the light reflected by the pupil to obtain the gaze vector and gaze point. The processing device 110 may further judge the facial expression from local position changes of the facial organs, or from a series of dynamic changes of the facial features; for example, rising mouth corners are judged to be a smile, while the overall positions of the facial organs are used to judge the facial features. The invention operates the electronic device by detecting biometric signals of the head, eyes, and face; the detection itself may adopt any feasible technique and should not be a limitation of the invention.

The present invention provides a method that lets a user interact with a user interface without hand operation; those of ordinary skill in the art may modify or vary it accordingly, and the invention is not limited thereto. For example, in the embodiment above, the system detects the user's head, eyes and face to obtain commands, displays the corresponding page on the user interface and performs a specific function on the electronic device, so that the user can interact with the electronic device without entering commands by gesture. In another embodiment, however, the user may also issue the confirmation command by pressing a physical button, making a specific gesture, or speaking, to control the electronic device to perform the specific function (a physical button, a gesture or a spoken command may each replace the aforementioned facial command). In general, an automotive electronic system may provide several physical buttons on the steering wheel; without reducing convenience or driving safety, such buttons can serve as the medium for entering the confirmation command, since the driver can easily press a button on the steering wheel. Alternatively, the processing device 110 may include a voice detection/recognition function so that the user can enter voice commands.
Moreover, the interaction method proposed by the present invention is not limited to automotive electronic systems and may also be applied to other types of electronic systems, such as game consoles and e-book readers. Any user interface that interacts with the user by detecting the user's head, eye and face movements and/or features falls within the scope of the present invention.
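The interchangeability of facial, button, gesture and voice confirmation can be sketched as a single handler that ignores the source of the event. This is a minimal model under assumed names: the `Confirm` enum, the `Page` layout and the option names are all hypothetical.

```python
from enum import Enum, auto

class Confirm(Enum):
    FACE = auto()     # facial command, e.g. a smile
    BUTTON = auto()   # steering-wheel button press
    GESTURE = auto()  # specific hand gesture
    VOICE = auto()    # spoken command

class Page:
    """Minimal page model: options laid out as (x, y, w, h) -> name."""
    def __init__(self, options):
        self.options = options

    def item_at(self, pos):
        px, py = pos
        for (x, y, w, h), name in self.options.items():
            if x <= px < x + w and y <= py < y + h:
                return name
        return None

def handle_confirmation(source: Confirm, indicator_pos, page):
    """Whatever the confirmation source, select the item under the indicator."""
    return page.item_at(indicator_pos)

page = Page({(0, 0, 100, 50): "navigation", (0, 50, 100, 50): "music"})
chosen = handle_confirmation(Confirm.VOICE, (40, 60), page)
```

Because the selection depends only on the indicator position, swapping the confirmation source changes nothing downstream, which is the substitution the paragraph describes.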

In an embodiment of the present invention, the user interface of the electronic device displays a page by detecting the user's head biometric signal, controlling the scrolling, movement and/or zooming of the page as the head moves. The user's eye biometric signal is then detected to control the position of an indicator on the page according to where the user's gaze falls. Finally, when a specific facial command is detected, the electronic device performs a specific function, such as entering a certain menu or operation mode. In this way the user can enter commands through the head, eyes and face, interacting with the user interface without any hand operation. The above are merely preferred embodiments of the present invention, and all equivalent changes and modifications made within the scope of the claims of the present invention shall fall within the coverage of the present invention.
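The head-to-page, eye-to-indicator and command-to-function pipeline summarized above can be sketched as one pass of an interaction loop. Everything below is a hypothetical stand-in: the stub classes and their method names are not from the patent, they only mirror sensing device 104, display device 102 and electronic device 10.

```python
class StubSensor:
    """Hypothetical stand-in for sensing device 104."""
    def __init__(self, head, gaze, command):
        self._head, self._gaze, self._command = head, gaze, command
    def capture(self):
        return None  # a real sensor would return a camera frame
    def head_signal(self, frame):
        return self._head
    def eye_signal(self, frame):
        return self._gaze
    def face_command(self, frame):
        return self._command

class StubDisplay:
    """Hypothetical stand-in for display device 102."""
    def __init__(self):
        self.offset = (0, 0)      # page scroll position
        self.indicator = (0, 0)   # indicator position on the page
    def update_page(self, head):
        dx, dy = head
        self.offset = (self.offset[0] + dx, self.offset[1] + dy)
    def move_indicator(self, gaze):
        self.indicator = gaze
    def indicator_position(self):
        return self.indicator

class StubDevice:
    """Hypothetical stand-in for the electronic device 10."""
    def __init__(self):
        self.performed_at = None
    def perform(self, pos):
        self.performed_at = pos  # e.g. enter the menu under the indicator

def interaction_loop(sensor, display, device):
    """One pass of the head -> eye -> face pipeline summarized above."""
    frame = sensor.capture()
    display.update_page(sensor.head_signal(frame))    # head scrolls the page
    display.move_indicator(sensor.eye_signal(frame))  # gaze moves the indicator
    command = sensor.face_command(frame)
    if command is not None:                           # facial command confirms
        device.perform(display.indicator_position())

sensor = StubSensor(head=(5, 0), gaze=(120, 200), command="smile")
display, device = StubDisplay(), StubDevice()
interaction_loop(sensor, display, device)
```

Running the loop repeatedly, with the stubs replaced by real detectors, yields continuous hands-free interaction: the specific function is only triggered when a facial command arrives.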

10‧‧‧electronic device

100‧‧‧user interface

102‧‧‧display device

104‧‧‧sensing device

110‧‧‧processing device

120‧‧‧user input device

30‧‧‧interaction process

300~308‧‧‧steps

400‧‧‧page

FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the present invention.
FIG. 2A is a schematic diagram of controlling panning of the displayed screen by head movement according to an embodiment of the present invention.
FIG. 2B is a schematic diagram of controlling zooming out or zooming in of the displayed screen by head movement according to an embodiment of the present invention.
FIG. 2C is a schematic diagram of marking with an indicator according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of an interaction process according to an embodiment of the present invention.
FIGS. 4A-4C illustrate a detailed operation of controlling the display device to display a page by head movement.
FIG. 5 is a schematic diagram of a user controlling the movement of the indicator by gazing.

Claims (16)

1. An interaction method for a user interface of an electronic device, the user interface comprising a display device, the interaction method comprising: detecting a head biometric signal of a user, and controlling the display device to display a page according to the head biometric signal; detecting an eye biometric signal of the user, and controlling a position of an indicator on the page according to the eye biometric signal; and detecting a command of the user and, when the command is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page.

2. The interaction method of claim 1, wherein the step of controlling the display device to display the page according to the head biometric signal comprises: controlling the screen displayed on the display device to scroll or move in a specific direction, or to zoom in or out, according to the head biometric signal, so as to display the page.

3. The interaction method of claim 1, wherein the indicator comprises a cursor, a pattern, a frame, or a specific color for indicating an item or region to be selected.

4. The interaction method of claim 1, wherein the command comprises a static facial command or a dynamic combination of a plurality of facial actions.
5. The interaction method of claim 1, wherein a form is displayed on the page, the form comprises a plurality of options, and the step of controlling the electronic device to perform the specific function according to the position of the indicator on the page when the command is detected comprises: when the command is detected, selecting the one of the plurality of options indicated by the indicator according to the position of the indicator on the page.

6. An electronic device, comprising: a user interface for interacting with a user, the user interface comprising: a display device; and a sensing device for detecting a head biometric signal and an eye biometric signal of the user; and a processing device, coupled to the user interface, for performing the following steps: controlling the display device to display a page according to the head biometric signal; controlling a position of an indicator on the page according to the eye biometric signal; and, when a command of the user is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page.

7. The electronic device of claim 6, wherein the step of controlling the display device to display the page according to the head biometric signal comprises: controlling the screen displayed on the display device to scroll or move in a specific direction, or to zoom in or out, according to the head biometric signal, so as to display the page.
8. The electronic device of claim 6, wherein the indicator comprises a cursor, a pattern, a frame, or a specific color for indicating an item or region to be selected.

9. The electronic device of claim 6, wherein the command comprises a static facial command or a dynamic combination of a plurality of facial actions.

10. The electronic device of claim 6, wherein a form is displayed on the page, the form comprises a plurality of options, and the step of controlling the electronic device to perform the specific function according to the position of the indicator on the page when the facial command is detected comprises: when the facial command is detected, selecting the one of the plurality of options indicated by the indicator according to the position of the indicator on the page.

11. An interaction method for a user interface of an electronic device, the user interface comprising a display device, the interaction method comprising: detecting an eye biometric signal of a user, and controlling a position of an indicator on a page displayed by the display device according to the eye biometric signal; and detecting a facial command of the user and, when the facial command is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page; wherein the facial command is a combination of a plurality of facial actions.
12. The interaction method of claim 11, further comprising: detecting a head biometric signal of the user, and controlling the display device to display the page according to the head biometric signal.

13. The interaction method of claim 12, wherein the step of controlling the display device to display the page according to the head biometric signal comprises: controlling the screen displayed on the display device to scroll or move in a specific direction, or to zoom in or out, according to the head biometric signal, so as to display the page.

14. The interaction method of claim 11, wherein the indicator comprises a cursor, a pattern, a frame, or a specific color for indicating an item or region to be selected.

15. The interaction method of claim 11, wherein a form is displayed on the page, the form comprises a plurality of options, and the step of controlling the electronic device to perform the specific function according to the position of the indicator on the page when the facial command is detected comprises: when the facial command is detected, selecting the one of the plurality of options indicated by the indicator according to the position of the indicator on the page.
16. An electronic device, comprising: a user interface for interacting with a user, the user interface comprising: a display device; and a sensing device for detecting an eye biometric signal and a facial command of the user, wherein the facial command is a combination of a plurality of facial actions; and a processing device, coupled to the user interface, for performing the following steps: controlling a position of an indicator on a page displayed by the display device according to the eye biometric signal; and, when the facial command is detected, controlling the electronic device to perform a specific function according to the position of the indicator on the page.
TW107107288A 2017-12-29 2018-03-05 Interaction method for user interface and electronic device thereof TW201931096A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201711482541.2A CN109992094A (en) 2017-12-29 2017-12-29 Interactive approach and electronic device for user interface
CN201721900090 2017-12-29
??201711482541.2 2017-12-29
??201721900090.5 2017-12-29

Publications (1)

Publication Number Publication Date
TW201931096A true TW201931096A (en) 2019-08-01

Family

ID=63256751

Family Applications (2)

Application Number Title Priority Date Filing Date
TW107202864U TWM561857U (en) 2017-12-29 2018-03-05 Electronic device interacting with user
TW107107288A TW201931096A (en) 2017-12-29 2018-03-05 Interaction method for user interface and electronic device thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
TW107202864U TWM561857U (en) 2017-12-29 2018-03-05 Electronic device interacting with user

Country Status (1)

Country Link
TW (2) TWM561857U (en)

Also Published As

Publication number Publication date
TWM561857U (en) 2018-06-11
