TW202221474A - Operating method by gestures in extended reality and head-mounted display system - Google Patents
Operating method by gestures in extended reality and head-mounted display system
- Publication number
- TW202221474A (application TW109144823A)
- Authority
- TW
- Taiwan
- Prior art keywords
- gesture
- hand
- menu
- image
- user
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
The present invention relates to virtual simulation, and more particularly, to an operation method by gestures in extended reality (XR) and a head-mounted display system.
Extended reality (XR) technologies for simulating sensation, perception, and/or environment, such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), are popular today. These technologies can be applied in many fields, such as gaming, military training, healthcare, and remote work.
In XR, when a user wears a head-mounted display (HMD), the user can make gestures with his/her hands to trigger specific functions. The functions may be related to hardware or software control, and it is easy for the user to control the head-mounted display system with his/her hands.
However, some gestures may not be intuitive enough to trigger functions. In view of this, embodiments of the present invention provide an operation method by gestures in XR and a head-mounted display system, so as to provide intuitive gesture control.
The operation method by gestures in XR according to an embodiment of the present invention includes, but is not limited to, the following steps. A first gesture is recognized in a first image. The first gesture corresponds to the user's hand. In response to the recognition result of the first gesture, a virtual hand and a first interactive object located in an interaction area are presented, where the virtual hand makes the first gesture. A second gesture is recognized in a second image. The second gesture corresponds to the user's hand and is different from the first gesture. The second gesture interacts with the first interactive object in the interaction area. In response to the recognition results of the first gesture and the second gesture, the virtual hand and a second interactive object are presented on the display, where the virtual hand makes the second gesture. The number of virtual hands may be one or two, and a virtual hand may be the hand of a full-body or half-body avatar in XR.
The head-mounted display system according to an embodiment of the present invention includes, but is not limited to, an image sensor, a display, and a processor. The image sensor captures images. The processor is coupled to the image sensor and the display and is configured to perform the following steps. The processor recognizes a first gesture in a first image captured by the image sensor. The first gesture corresponds to the user's hand. In response to the recognition result of the first gesture, the processor presents, on the display, a virtual hand and a first interactive object located in an interaction area, where the virtual hand makes the first gesture. The processor recognizes a second gesture in a second image captured by the image sensor. The second gesture corresponds to the user's hand, is different from the first gesture, and interacts with the first interactive object in the interaction area. In response to the recognition results of the first gesture and the second gesture, the processor presents the virtual hand and a second interactive object on the display, where the virtual hand makes the second gesture. The number of virtual hands may be one or two, and a virtual hand may be the hand of a full-body or half-body avatar in XR.
Based on the above, in the operation method by gestures in XR and the head-mounted display system according to the embodiments of the present invention, two consecutive gestures are recognized in two images, and the gesture combination can trigger the display to present different interactive objects. In addition, an interactive object is provided to further interact with the virtual hand. Thereby, a convenient and interesting way to control the head-mounted display system is provided.
In order to make the aforementioned features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals are used in the drawings and the description to refer to the same or like parts.
FIG. 1 is a block diagram illustrating a head-mounted display system 100 according to one of the exemplary embodiments of the present invention. Referring to FIG. 1, the head-mounted display (HMD) system 100 includes, but is not limited to, a memory 110, a display 120, an image sensor 130, and a processor 150. The HMD system 100 is suitable for XR or other reality-simulation-related technologies.
The memory 110 may be any type of fixed or removable random-access memory (RAM), read-only memory (ROM), flash memory, a similar device, or a combination of the above. The memory 110 records program code, device configurations, buffered data, or permanent data (such as images, a gesture classifier, predefined gestures, or settings), and these data will be introduced later.
The display 120 may be an LCD, an LED display, or an OLED display.
The image sensor 130 may be a camera (such as a monochrome camera or a color camera), a depth camera, a video recorder, or another image sensor capable of capturing images.
The processor 150 is coupled to the memory 110, the display 120, and the image sensor 130. The processor 150 is configured to load the program code stored in the memory 110 to perform the procedures of the exemplary embodiments of the present invention.
In some embodiments, the processor 150 may be a central processing unit (CPU), a microprocessor, a microcontroller, a graphics processing unit (GPU), a digital signal processing (DSP) chip, or a field-programmable gate array (FPGA). The functions of the processor 150 may also be implemented by an independent electronic device or an integrated circuit (IC), and the operations of the processor 150 may also be implemented by software.
In one embodiment, an HMD or digital glasses includes the memory 110, the display 120, the image sensor 130, and the processor 150. In some embodiments, the processor 150 may not be installed in the same apparatus as the display 120 and/or the image sensor 130. However, the apparatuses respectively equipped with the display 120, the image sensor 130, and the processor 150 may further include communication transceivers with compatible communication technologies (such as Bluetooth, Wi-Fi, and IR wireless communication) or physical transmission lines to transmit or receive data with each other. For example, the processor 150 may be installed in the HMD while the image sensor 130 is installed outside the HMD. For another example, the processor 150 may be installed in a computing device while the display 120 is installed outside the computing device.
To better understand the operating process provided in one or more embodiments of the present invention, several embodiments are exemplified below to explain the head-mounted display system 100 in detail. The devices and modules in the HMD system 100 are applied in the following embodiments to explain the operation method by gestures in XR provided herein. Each step of the method can be adjusted according to the actual implementation and should not be limited to what is described herein.
FIG. 2 is a flowchart illustrating an operation method by gestures in extended reality (XR) according to one of the exemplary embodiments of the present invention. Referring to FIG. 2, the processor 150 may recognize a first gesture in a first image captured by the image sensor 130 (step S210). Specifically, the first gesture is a predefined gesture, such as a palm-up, palm-down, hand-waving, or fist gesture. The first gesture corresponds to the user's hand. First, the processor 150 may recognize the hand in the image. Then, the processor 150 may recognize the gesture made by the user's hand in the first image and compare whether the recognized gesture is the predefined first gesture.
In one embodiment, the processor 150 may recognize the joints of the user's hand from the first image and predict the gesture of the user's hand through a gesture classifier based on the first image and the recognized joints of the user's hand. Specifically, the positions of the hand joints are related to the gesture. In addition, the contour, size, texture, shape, and other features in the image are related to the gesture. A designer may prepare a large number of images containing the predefined gestures as training samples and use the training samples to train the gesture classifier through a machine learning algorithm configured for gesture recognition (such as deep learning, an artificial neural network (ANN), or a support vector machine (SVM)). In addition, the hand joints are identified in these training samples, and the identified joints serve as further training samples used to train the same gesture classifier or another gesture classifier. The trained gesture classifier can then be used to determine which gesture is made in an input image.
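The patent leaves the classifier's internals open (deep learning, ANN, or SVM may all be used). As a loose, hypothetical sketch of the idea of training on joint positions and then predicting a gesture label, a nearest-centroid classifier over flattened joint coordinates might look like this; the labels and toy coordinates are invented for illustration and are not part of the patent:

```python
import numpy as np

# Hypothetical sketch only: the patent allows any trained classifier
# (deep learning, ANN, SVM); a nearest-centroid model stands in here.
class JointGestureClassifier:
    def __init__(self):
        self.centroids = {}  # gesture label -> mean joint-coordinate vector

    def fit(self, samples):
        # samples: {label: list of flattened joint-coordinate vectors}
        for label, vectors in samples.items():
            self.centroids[label] = np.mean(np.asarray(vectors, float), axis=0)

    def predict(self, joints):
        joints = np.asarray(joints, float)
        # return the label whose training centroid is closest to the input
        return min(self.centroids,
                   key=lambda g: np.linalg.norm(joints - self.centroids[g]))

# Toy 2-D "joints" standing in for real hand landmarks.
clf = JointGestureClassifier()
clf.fit({"palm_up": [[0.0, 1.0], [0.1, 0.9]],
         "fist":    [[1.0, 0.0], [0.9, 0.1]]})
print(clf.predict([0.05, 0.95]))  # palm_up
```

In a real system, the joint vectors would come from a hand-tracking stage and the classifier would be a trained network rather than this centroid rule.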
In some embodiments, the processor 150 may predict the gesture based only on the first image, without the recognized joints of the user's hand, and then confirm the predicted gesture based on the recognized joints of the user's hand. For example, FIG. 3 is a schematic diagram illustrating the prediction of a gesture classifier according to one of the exemplary embodiments of the present invention. Referring to FIG. 3, if an image OM containing a gesture is input into the gesture classifier, features are extracted from the image OM (step S301, i.e., feature extraction). For example, in step S301, the processor 150 performs convolution on the pixel values of the image OM with corresponding kernels to output feature maps. The features may be textures, corners, edges, or shapes. Then, the processor 150 may classify the features (e.g., the feature maps) extracted in step S301 (step S302, i.e., classification). It should be noted that one gesture classifier may be configured with one or more labels (i.e., one or more gestures in this embodiment). The gesture classifier may output the determined gesture.
After one or more gestures are determined based only on the image OM, the image OM with the recognized hand joints J is input into the same or another gesture classifier. Similarly, the processor 150 may perform feature extraction (step S301) and classification (step S302) on the image OM with the recognized hand joints J to output the determined gesture. The subsequently determined gesture is used to check the correctness of the first determined gesture. For example, if the two determined gestures are the same, the processor 150 may confirm the gesture. If the determined gestures are different, the processor 150 may determine the gesture in another image.
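The two-pass check described above (predict from the image alone, then confirm with the joint-annotated image) reduces to a simple agreement rule. A hypothetical sketch, with stand-in predictor callables replacing the real classifiers:

```python
def confirm_gesture(predict_from_image, predict_from_joints, image, joints):
    """Return the gesture only when both passes agree; otherwise None,
    signalling that the processor should try the next captured image."""
    first = predict_from_image(image)
    second = predict_from_joints(image, joints)
    return first if first == second else None

# Stand-in predictors for illustration (real ones would be classifiers).
agree = confirm_gesture(lambda img: "fist", lambda img, j: "fist",
                        None, None)
differ = confirm_gesture(lambda img: "fist", lambda img, j: "palm_up",
                         None, None)
print(agree, differ)  # fist None
```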
In one embodiment, the processor 150 may further distinguish the user's right hand from the left hand. This means the processor 150 knows which hand makes the gesture or is captured by the image sensor 130 (i.e., the hand is located within the field of view (FOV) of the image sensor 130). In some embodiments, the processor 150 may define different predefined gestures or the same predefined gesture for the user's right hand and left hand, respectively. For example, one function is triggered by a thumb-up gesture of either the right hand or the left hand. For another example, another function is triggered by an index-finger-up gesture of the right hand, while the same function is triggered by a pinky-up gesture of the left hand.
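The per-hand bindings described above amount to a lookup keyed on a (hand, gesture) pair. A hypothetical mapping, with invented gesture and function names, might look like:

```python
# Hypothetical bindings: the same function may be reachable by the same
# gesture on both hands, or by a different gesture on each hand.
GESTURE_BINDINGS = {
    ("right", "thumb_up"): "function_a",
    ("left",  "thumb_up"): "function_a",   # same gesture for both hands
    ("right", "index_up"): "function_b",
    ("left",  "pinky_up"): "function_b",   # different gesture per hand
}

def triggered_function(hand, gesture):
    # None means the recognized (hand, gesture) pair triggers nothing
    return GESTURE_BINDINGS.get((hand, gesture))

print(triggered_function("left", "pinky_up"))  # function_b
```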
It should be noted that many other gesture recognition algorithms exist, such as 3D-model-based algorithms, skeleton-based algorithms, appearance-based models, or electromyography-based models. Those algorithms can be implemented according to the actual requirements.
The processor 150 may present, on the display 120, a virtual hand and a first interactive object located in an interaction area in response to the recognition result of the first gesture (step S230). Specifically, if the recognition result is that the gesture in the first image is the same as the first gesture, the virtual hand corresponding to the user's hand makes the first gesture. The processor 150 may present the virtual hand making the first gesture on the display 120, so that the user knows whether he/she has made the correct gesture. However, if the recognized gesture of the first image is not the first gesture, the processor 150 may still present the recognized gesture on the display 120. In addition, the first gesture is used to trigger the display 120 to present the first interactive object. This means the first interactive object may not be presented on the display 120 until the user makes the first gesture. The first interactive object may be an image, a video, a virtual ball, or another virtual object. The first interactive object is located in the interaction area of the virtual hand or the avatar's hand. This means the fingers, palm, or other parts of the virtual hand may be able to interact with any object located in the interaction area. For example, a finger may touch a virtual button, or the palm may grasp a virtual ball in the interaction area. It should be noted that the shape and position of the interaction area can be modified according to the actual requirements. Furthermore, the number of virtual hands may be one or two, and a virtual hand may be the hand of a full-body or half-body avatar in XR.
In one embodiment, the first interactive object is used to notify the user that interaction is available and to encourage the user to try another gesture. That is, the first interactive object serves as a hint for the subsequent gesture. For example, the first interactive object is a virtual ball, and the user may try to grasp or grab the virtual ball.
The processor 150 may recognize a second gesture in a second image (step S250). Specifically, the second gesture is another predefined gesture, such as a palm-up, palm-down, crossed-fingers, or fist gesture, but is different from the first gesture. The second gesture also corresponds to the user's hand. The processor 150 may recognize the gesture made by the user's hand in the second image and compare whether the recognized gesture is the predefined second gesture.
In one embodiment, as mentioned in detail for step S210, the processor 150 may recognize the joints of the user's hand from the second image and predict the gesture of the user's hand through the gesture classifier based on the second image and the recognized joints of the user's hand. In some embodiments, as mentioned in detail for step S210, the processor 150 may predict the gesture based only on the second image, without the recognized joints of the user's hand, and then confirm the predicted gesture based on the recognized joints of the user's hand.
The processor 150 may present the virtual hand and a second interactive object on the display 120 in response to the recognition result of the second gesture (step S270). Specifically, if the recognition result is that the gesture in the second image is the same as the second gesture, the virtual hand corresponding to the user makes the second gesture. The processor 150 may present the virtual hand making the second gesture on the display 120, and the hand with the second gesture may interact with the first interactive object in the interaction area. For example, the virtual hand grabs the virtual ball. In some embodiments, an animation of the deformation of the first interactive object may be presented on the display 120, for example, squeezing the virtual ball. However, if the recognized gesture of the second image is not the second gesture, the processor 150 may still present the recognized gesture on the display 120. In addition, the first interactive object may be hidden because of the wrong gesture.
Furthermore, the combination of the first gesture and the second gesture is used to trigger the display 120 to present the second interactive object but hide the first interactive object. This means the second interactive object may not be presented on the display 120 until the user makes the first gesture and then makes the second gesture. If a third gesture different from the second gesture is recognized in the second image after the first gesture has been recognized in the first image, the first interactive object is still presented on the display 120 and the second interactive object is not presented. The second interactive object may be an image, a video, a menu, or another virtual object. On the other hand, since the second gesture has been recognized, it is no longer necessary to present the first interactive object (which serves as the hint for the second gesture). Therefore, the first interactive object helps the user intuitively perform the combination of the first gesture and the second gesture.
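The presentation logic of steps S230 through S270 can be read as a small state machine: the first gesture shows the hint object, the first-then-second combination swaps it for the second object, and any other gesture after the first leaves only the hint visible. A hypothetical sketch (gesture names and object labels are invented for illustration and are not the patent's implementation):

```python
class GestureMenuController:
    """Minimal sketch of the two-step trigger; not the patent's code."""

    def __init__(self, first="palm_up", second="fist"):
        self.first, self.second = first, second
        self.prompted = False        # True once the first gesture was seen
        self.visible = set()         # interactive objects currently shown

    def on_gesture(self, gesture):
        if not self.prompted and gesture == self.first:
            self.prompted = True
            self.visible = {"first_object"}    # e.g. the virtual-ball hint
        elif self.prompted and gesture == self.second:
            self.visible = {"second_object"}   # menu shown, hint hidden
        elif self.prompted:
            self.visible = {"first_object"}    # other gesture: keep the hint
        return self.visible

ctl = GestureMenuController()
ctl.on_gesture("palm_up")             # shows the hint object
print(ctl.on_gesture("fist"))         # {'second_object'}
```

A fuller version would also cover closing the menu by repeating the combination, as in FIG. 6A and FIG. 6B.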
For example, FIG. 4A and FIG. 4B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention. Referring to FIG. 4A, a palm-up gesture of the left hand, which is defined as the first gesture, is recognized in the first image at a first time point. A virtual left hand LH with the palm-up gesture and a virtual ball io1 (i.e., the first interactive object) are presented on the display 120. Referring to FIG. 4B, a fist gesture of the left hand, which is defined as the second gesture, is recognized in the second image at a second time point. The virtual left hand LH with the fist gesture and a main menu io2 (i.e., the second interactive object) are presented on the display 120. The main menu io2 includes multiple icons, such as icons for a friend list, a map, and an application store.
In one embodiment, the second interactive object includes a first menu and a second menu, and the second menu is different from the first menu. If the right hand is recognized, the processor 150 may present the first menu on the display 120, and if the left hand is recognized, the processor 150 may present the second menu on the display 120. This means that if the combination of the first gesture and the second gesture is made by the right hand, the first menu is presented on the display 120; however, if the combination is made by the left hand, the second menu is presented on the display 120.
For example, the second menu is the main menu io2 as shown in FIG. 4B. FIG. 5A and FIG. 5B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention. Referring to FIG. 5A, a palm-up gesture of the right hand, which is defined as the first gesture, is recognized in the first image at a third time point. A virtual right hand RH with the palm-up gesture and a virtual ball io3 (i.e., the first interactive object) are presented on the display 120. Referring to FIG. 5B, a fist gesture of the right hand, which is defined as the second gesture, is recognized in the second image at a fourth time point. The virtual right hand RH with the fist gesture and a quick settings menu io4 (i.e., the second interactive object, which is the first menu) are presented on the display 120. The quick settings menu io4 includes multiple icons, such as icons for turning the camera on/off, performing a specific motion on the virtual hand, and messaging.
In one embodiment, if the second gesture is detected, the processor 150 may further hide the first interactive object on the display 120. This means that no further hint for the subsequent gesture is needed, and the first interactive object becomes invisible. Therefore, only the second interactive object is presented on the display 120. Taking FIG. 5A and FIG. 5B as an example, the virtual ball io3 is hidden after the fist gesture is recognized.
In another embodiment, if the second interactive object has already been presented on the display 120 and the recognition results of the first gesture and the second gesture are confirmed (i.e., the user makes the combination of the first gesture and the second gesture), the processor 150 may hide both the first interactive object and the second interactive object on the display 120. Therefore, the menu can be closed by gestures.
For example, FIG. 6A and FIG. 6B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention. Referring to FIG. 6A, the quick settings menu io4 has been presented on the display 120, and the virtual ball io3 is presented because of the palm-up gesture of the right hand RH. Referring to FIG. 6B, both the virtual ball io3 and the quick settings menu io4 are hidden because of the fist gesture of the right hand RH.
It should be noted that the first interactive object, the second interactive object, the first gesture, and the second gesture in FIG. 4A to FIG. 6B can be modified according to the actual requirements, and the embodiments are not limited thereto.
To sum up, in the operation method by gestures in XR and the head-mounted display system according to the embodiments of the present invention, a gesture combination is recognized in two images, and the gesture combination is used to present the second interactive object on the display. In addition, after the first gesture is recognized, the first interactive object may be displayed to further prompt the user to make the second gesture. Thereby, intuitive gesture control can be provided.
Although the present invention has been disclosed above by the embodiments, they are not intended to limit the present invention. Anyone with ordinary knowledge in the technical field may make some changes and modifications without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the scope of the appended claims.
100: head-mounted display system
110: memory
120: display
130: image sensor
150: processor
io1, io3: virtual ball
io2: main menu
io4: quick settings menu
J: recognized hand joints
LH: virtual left hand
OM: image
RH: virtual right hand
S210, S230, S250, S270, S301, S302: steps
FIG. 1 is a block diagram illustrating a head-mounted display system according to one of the exemplary embodiments of the present invention.
FIG. 2 is a flowchart illustrating an operation method by gestures in extended reality (XR) according to one of the exemplary embodiments of the present invention.
FIG. 3 is a schematic diagram illustrating the prediction of a gesture classifier according to one of the exemplary embodiments of the present invention.
FIG. 4A and FIG. 4B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention.
FIG. 5A and FIG. 5B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention.
FIG. 6A and FIG. 6B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention.
S210~S270: steps
Claims (16)
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202017103965A | 2020-11-25 | 2020-11-25 | |
| US17/103,965 | 2020-11-25 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| TW202221474A (en) | 2022-06-01 |
Family

- ID: 81668113
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW109144823A (published as TW202221474A) | Operating method by gestures in extended reality and head-mounted display system | | |
Country Status (2)

| Country | Link |
|---|---|
| CN | CN114546103A (en) |
| TW | TW202221474A (en) |
Families Citing this family (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN117311486A | 2022-06-22 | 2023-12-29 | 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.) | Interaction method and device for light field display and light field display system |
- 2020-12-18: TW patent application TW109144823A (published as TW202221474A), status unknown
- 2020-12-21: CN patent application CN202011523777.8A (published as CN114546103A), status pending
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN114546103A | 2022-05-27 |
Similar Documents

| Publication | Title |
|---|---|
| US11587297B2 | Virtual content generation |
| JP7137804B2 | Method and system for gesture-based interaction |
| US11048333B2 | System and method for close-range movement tracking |
| KR101844390B1 | Systems and techniques for user interface control |
| JP2019535055A | Perform gesture-based operations |
| US20140240225A1 | Method for touchless control of a device |
| CN110622219B | Interactive augmented reality |
| US20220066569A1 | Object interaction method and system, and computer-readable medium |
| US10168790B2 | Method and device for enabling virtual reality interaction with gesture control |
| US11054896B1 | Displaying virtual interaction objects to a user on a reference plane |
| TW202221474A | Operating method by gestures in extended reality and head-mounted display system |
| CN113168221A | Information processing apparatus, information processing method, and program |
| CN114360047A | Hand-lifting gesture recognition method and device, electronic equipment and storage medium |
| JP2016099643A | Image processing device, image processing method, and image processing program |
| US11500453B2 | Information processing apparatus |
| JP2022092745A | Operation method using gesture in extended reality and head-mounted display system |
| EP4009143A1 | Operating method by gestures in extended reality and head-mounted display system |
| US11782548B1 | Speed adapted touch detection |
| TWI696092B | Head mounted display system capable of creating a virtual object in a virtual environment according to a real object in a real environment and assigning a predetermined interactive characteristic to the virtual object, related method and related computer readable storage medium |
| US11054941B2 | Information processing system, information processing method, and program for correcting operation direction and operation amount |
| US20230061557A1 | Electronic device and program |
| Vidal Jr et al. | Extending Smartphone-Based Hand Gesture Recognition for Augmented Reality Applications with Two-Finger-Pinch and Thumb-Orientation Gestures |
| CN116166161A | Interaction method based on multi-level menu and related equipment |
| CN116129518A | Somatosensory operation method based on gesture recognition |
| Yang et al. | Around-device finger input on commodity smartwatches with learning guidance through discoverability |