TW202221474A - Operating method by gestures in extended reality and head-mounted display system - Google Patents

Operating method by gestures in extended reality and head-mounted display system

Info

Publication number
TW202221474A
Authority
TW
Taiwan
Prior art keywords
gesture
hand
menu
image
user
Prior art date
Application number
TW109144823A
Other languages
Chinese (zh)
Inventor
郭勝修
Original Assignee
未來市股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 未來市股份有限公司 filed Critical 未來市股份有限公司
Publication of TW202221474A publication Critical patent/TW202221474A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An operating method by gestures in extended reality (XR) and a head-mounted display (HMD) system are provided. In the method, a first gesture is identified in a first image. The first gesture corresponds to a user’s hand. A virtual hand and a first interactive object located on an interactive area are presented in response to an identified result of the first gesture. The virtual hand makes the first gesture. A second gesture is identified in a second image. The second gesture corresponds to the user’s hand and is different from the first gesture. The second gesture interacts with the first interactive object in the interactive area. The virtual hand and a second interactive object are presented on the display in response to an identified result of the second gesture. The virtual hand makes the second gesture. Accordingly, intuitive gesture control is provided.

Description

Operating method by gestures in extended reality and head-mounted display system

The present invention relates to virtual simulation, and more particularly, to an operating method by gestures in extended reality (XR) and a head-mounted display system.

Extended reality (XR) technologies for simulating sensations, perceptions, and/or environments, such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), are popular today. These technologies can be applied in many fields, such as gaming, military training, healthcare, and remote work.

In XR, a user wearing a head-mounted display (HMD) can make gestures with his/her hands to trigger specific functions. The functions may be related to hardware or software control. Using the hands is an easy way for the user to control the head-mounted display system.

However, some gestures may not be intuitive enough for triggering functions. In view of this, the embodiments of the present invention provide an operating method by gestures in XR and a head-mounted display system, so as to provide intuitive gesture control.

The operating method by gestures in XR according to an embodiment of the present invention includes, but is not limited to, the following steps. A first gesture is identified in a first image. The first gesture corresponds to a user's hand. In response to the identified result of the first gesture, a virtual hand and a first interactive object located on an interactive area are presented. The virtual hand makes the first gesture. A second gesture is identified in a second image. The second gesture corresponds to the user's hand and is different from the first gesture. The second gesture interacts with the first interactive object in the interactive area. In response to the identified results of the first gesture and the second gesture, the virtual hand and a second interactive object are presented on the display. The virtual hand makes the second gesture. The number of virtual hands may be one or two. The virtual hand may be the hand of a full-body or half-body avatar in XR.

The head-mounted display system according to an embodiment of the present invention includes, but is not limited to, an image sensor, a display, and a processor. The image sensor captures images. The processor is coupled to the image sensor and the display, and is configured to perform the following steps. The processor identifies a first gesture in a first image captured by the image sensor. The first gesture corresponds to a user's hand. In response to the identified result of the first gesture, the processor presents a virtual hand and a first interactive object located on an interactive area on the display. The virtual hand makes the first gesture. The processor identifies a second gesture in a second image captured by the image sensor. The second gesture corresponds to the user's hand and is different from the first gesture, and the second gesture interacts with the first interactive object in the interactive area. In response to the identified results of the first gesture and the second gesture, the processor presents the virtual hand and a second interactive object on the display. The virtual hand makes the second gesture. The number of virtual hands may be one or two. The virtual hand may be the hand of a full-body or half-body avatar in XR.

Based on the above, in the operating method by gestures in XR and the head-mounted display system according to the embodiments of the present invention, two consecutive gestures are identified in two images, and the gesture combination can trigger the display to present different interactive objects. In addition, an interactive object is provided to further interact with the virtual hand. Accordingly, a convenient and interesting way to control the head-mounted display system is provided.

To make the aforementioned features and advantages of the present invention more comprehensible, embodiments accompanied with drawings are described in detail below.

Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals are used in the drawings and the description to refer to the same or like parts.

FIG. 1 is a block diagram illustrating a head-mounted display system 100 according to one of the exemplary embodiments of the present invention. Referring to FIG. 1, the head-mounted display (HMD) system 100 includes, but is not limited to, a memory 110, a display 120, an image sensor 130, and a processor 150. The HMD system 100 is adapted for XR or other reality-simulation related technologies.

The memory 110 may be any type of fixed or removable random-access memory (RAM), read-only memory (ROM), flash memory, a similar device, or a combination of the above devices. The memory 110 records program code, device configurations, buffer data, or persistent data (such as images, gesture classifiers, predefined gestures, or settings), and these data will be introduced later.

The display 120 may be an LCD, an LED display, or an OLED display.

The image sensor 130 may be a camera (such as a monochrome camera or a color camera), a depth camera, a video recorder, or another image sensor capable of capturing images.

The processor 150 is coupled to the memory 110, the display 120, and the image sensor 130. The processor 150 is configured to load the program code stored in the memory 110 to perform the procedures of the exemplary embodiments of the present invention.

In some embodiments, the processor 150 may be a central processing unit (CPU), a microprocessor, a microcontroller, a graphics processing unit (GPU), a digital signal processing (DSP) chip, or a field-programmable gate array (FPGA). The functions of the processor 150 may also be implemented by an independent electronic device or an integrated circuit (IC), and the operations of the processor 150 may also be implemented by software.

In one embodiment, an HMD or a pair of digital glasses includes the memory 110, the display 120, the image sensor 130, and the processor 150. In some embodiments, the processor 150 may not be installed in the same apparatus as the display 120 and/or the image sensor 130. In that case, the apparatuses respectively equipped with the display 120, the image sensor 130, and the processor 150 may further include communication transceivers with compatible communication technologies (such as Bluetooth, Wi-Fi, and IR wireless communication) or physical transmission lines, to transmit data to or receive data from each other. For example, the processor 150 may be installed in the HMD while the image sensor 130 is installed outside the HMD. For another example, the processor 150 may be installed in a computing device while the display 120 is installed outside the computing device.

To better understand the operating process provided in one or more embodiments of the present invention, several embodiments are exemplified below to explain the head-mounted display system 100 in detail. The devices and modules of the system 100 are applied in the following embodiments to explain the operating method by gestures in XR provided herein. Each step of the method can be adjusted according to the actual implementation and should not be limited to what is described herein.

FIG. 2 is a flowchart illustrating an operating method by gestures in extended reality (XR) according to one of the exemplary embodiments of the present invention. Referring to FIG. 2, the processor 150 may identify a first gesture in a first image captured by the image sensor 130 (step S210). Specifically, the first gesture is a predefined gesture, such as a palm-up, palm-down, waving, or fist gesture. The first gesture corresponds to the user's hand. First, the processor 150 may identify the hand in the image. Then, the processor 150 may identify the gesture made by the user's hand in the first image and compare whether the identified gesture is the predefined first gesture.

In one embodiment, the processor 150 may identify the joints of the user's hand from the first image and predict the gesture of the user's hand through a gesture classifier based on the first image and the identified joints of the user's hand. Specifically, the positions of the hand joints are related to the gesture. In addition, the contour, size, texture, shape, and other features in the image are related to the gesture. The designer may prepare a large number of images containing the predefined gestures as training samples and use the training samples to train the gesture classifier through a machine learning algorithm configured for gesture recognition (such as deep learning, an artificial neural network (ANN), or a support vector machine (SVM)). Furthermore, the hand joints identified in these training samples serve as additional training samples to train the same gesture classifier or another gesture classifier. The trained gesture classifier can then be used to determine which gesture is made in an input image. A minimal training sketch is shown below.
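As an illustration of the training described above, the following is a minimal sketch that trains a support vector machine on flattened hand-joint coordinates. The synthetic dataset, the joint layout (21 joints with 3D coordinates), and the label set are assumptions made for illustration only; the patent leaves the algorithm and feature design open.

```python
# Minimal sketch: train an SVM gesture classifier on hand-joint features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical training data: each sample is 21 hand joints with
# (x, y, z) coordinates, flattened into a 63-dimensional vector.
rng = np.random.default_rng(0)
n_samples = 400
X = rng.random((n_samples, 21 * 3))   # joint features extracted per image
y = rng.integers(0, 4, n_samples)     # 0: palm-up, 1: fist, 2: wave, 3: other

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifier = SVC(kernel="rbf")        # the gesture classifier
classifier.fit(X_train, y_train)
print("held-out accuracy:", classifier.score(X_test, y_test))
```

With real joint data extracted from labeled images, the same `fit`/`predict` flow would replace the random arrays above.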

In some embodiments, the processor 150 may predict the gesture based only on the first image and without the identified joints of the user's hand, and then confirm the predicted gesture based on the identified joints of the user's hand. For example, FIG. 3 is a schematic diagram illustrating the prediction of a gesture classifier according to one of the exemplary embodiments of the present invention. Referring to FIG. 3, if an image OM containing a gesture is input into the gesture classifier, features are extracted from the image OM (step S301, i.e., feature extraction). For example, in step S301, the processor 150 performs convolution on the pixel values of the image OM with the corresponding kernels of the filters to output feature maps. A feature may be a texture, a corner, an edge, or a shape. Then, the processor 150 may classify the features (such as the feature maps) extracted in step S301 (step S302, i.e., classification). It should be noted that a gesture classifier may be configured with one or more labels (i.e., in this embodiment, one or more gestures). The gesture classifier may output the determined gesture. A toy sketch of these two steps follows.
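The following toy sketch mirrors the two steps of FIG. 3: a hand-rolled 2-D convolution stands in for feature extraction (S301), and a nearest-prototype lookup stands in for classification (S302). The kernel, prototypes, and labels are illustrative assumptions, not the patent's implementation, where a learned classifier would perform both steps.

```python
import numpy as np

def extract_features(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """S301: convolve the image with a filter kernel to get a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(feature_map: np.ndarray, prototypes: dict) -> str:
    """S302: return the label whose prototype feature vector is nearest."""
    v = feature_map.ravel()
    return min(prototypes, key=lambda label: np.linalg.norm(prototypes[label] - v))

# Toy usage with a 5x5 image and a 3x3 vertical-edge kernel (illustrative only).
img = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1.0, 0.0, -1.0]] * 3)
fmap = extract_features(img, edge)                         # 3x3 feature map
protos = {"fist": np.zeros(9), "palm_up": np.full(9, -6.0)}
print(classify(fmap, protos))                              # -> "palm_up"
```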

After one or more gestures are determined based only on the image OM, the image OM with the identified hand joints J is input into the same or another gesture classifier. Similarly, the processor 150 may perform feature extraction (step S301) and classification (step S302) on the image OM with the identified hand joints J to output a determined gesture. The subsequently determined gesture is used to check the correctness of the first determined gesture. For example, if the two determined gestures are the same, the processor 150 may confirm the gesture. If the determined gestures are different, the processor 150 may determine the gesture in another image. A minimal sketch of this confirmation logic follows.
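A minimal sketch of the confirmation step, assuming two predictor callables that stand in for the trained classifier(s) with and without joint input:

```python
from typing import Callable, Optional

def confirm_gesture(image,
                    joints,
                    predict_from_image: Callable,
                    predict_with_joints: Callable) -> Optional[str]:
    """Confirm a gesture only if both predictions agree."""
    first = predict_from_image(image)            # prediction without joints
    second = predict_with_joints(image, joints)  # prediction with joints J
    if first == second:
        return first    # confirmed gesture
    return None         # predictions disagree: try the next image
```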

In one embodiment, the processor 150 may further distinguish the user's right hand from the left hand. That is, the processor 150 knows which hand makes the gesture or is captured by the image sensor 130 (i.e., the hand is located within the field of view (FOV) of the image sensor 130). In some embodiments, the processor 150 may define different predefined gestures or the same predefined gesture for the user's right hand and left hand, respectively. For example, one function may be triggered by a thumb-up gesture of either the right hand or the left hand. For another example, a function may be triggered by an index-finger-up gesture of the right hand, while the same function is triggered by a pinky-up gesture of the left hand, as in the sketch below.
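The handedness-specific bindings can be captured in a simple lookup table. The gesture and function names below are hypothetical placeholders matching the examples above:

```python
from typing import Optional

GESTURE_BINDINGS = {
    ("right", "thumb_up"): "function_a",
    ("left", "thumb_up"): "function_a",   # same function for either hand
    ("right", "index_up"): "function_b",  # right hand triggers with index up
    ("left", "pinky_up"): "function_b",   # left hand uses a different gesture
}

def function_for(hand: str, gesture: str) -> Optional[str]:
    """Look up which function, if any, this hand/gesture pair triggers."""
    return GESTURE_BINDINGS.get((hand, gesture))
```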

It should be noted that many other gesture recognition algorithms exist, for example, 3D-model-based algorithms, skeleton-based algorithms, appearance-based models, or electromyography-based models. Those algorithms can be implemented according to the actual requirements.

The processor 150 may present a virtual hand and a first interactive object located on an interactive area on the display 120 in response to the identified result of the first gesture (step S230). Specifically, if the identified result is that the gesture in the first image is the same as the first gesture, the virtual hand corresponding to the user's hand makes the first gesture. The processor 150 may present the virtual hand making the first gesture on the display 120, so that the user knows whether he/she made the correct gesture. However, if the identified gesture of the first image is not the first gesture, the processor 150 may still present the identified gesture on the display 120. Furthermore, the first gesture is used to trigger the display 120 to present the first interactive object. This means that the first interactive object may not be presented on the display 120 until the user makes the first gesture. The first interactive object may be an image, a video, a virtual ball, or another virtual object. The first interactive object is located on the interactive area of the virtual hand or the hand of the avatar. This means that the fingers, palm, or other parts of the virtual hand may be able to interact with any object located in the interactive area. For example, a finger can touch a virtual key, or a palm can grasp a virtual ball in the interactive area. It should be noted that the shape and position of the interactive area can be modified based on the actual requirements. Furthermore, the number of virtual hands may be one or two. The virtual hand may be the hand of a full-body or half-body avatar in XR. A schematic sketch of this step follows.
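A schematic sketch of step S230, assuming a hypothetical minimal scene API: the virtual hand always mirrors the identified gesture, while the first interactive object is spawned on the interactive area only when the predefined first gesture is matched.

```python
FIRST_GESTURE = "palm_up"  # illustrative choice, per FIG. 4A

class Scene:
    """Hypothetical stand-in for the HMD rendering back end."""
    def show_virtual_hand(self, pose: str) -> None:
        print(f"virtual hand rendered making: {pose}")

    def spawn(self, obj: str, anchor: str) -> None:
        print(f"{obj} spawned on: {anchor}")

def on_first_image(scene: Scene, identified_gesture: str) -> None:
    # The virtual hand mirrors whatever gesture was identified,
    # so the user can see whether the gesture was made correctly.
    scene.show_virtual_hand(pose=identified_gesture)
    # The first interactive object appears only for the correct gesture.
    if identified_gesture == FIRST_GESTURE:
        scene.spawn("virtual_ball", anchor="interactive_area")
```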

In one embodiment, the first interactive object is used to notify the user that an interaction is available and to make the user attempt another gesture. That is, the first interactive object is related to a hint for the subsequent gesture. For example, the first interactive object is a virtual ball, and the user may try to grasp or grab the virtual ball.

The processor 150 may identify a second gesture in a second image (step S250). Specifically, the second gesture is another predefined gesture, such as a palm-up, palm-down, crossed-finger, or fist gesture, but it is different from the first gesture. The second gesture also corresponds to the user's hand. The processor 150 may identify the gesture made by the user's hand in the second image and compare whether the identified gesture is the predefined second gesture.

In one embodiment, as detailed in step S210, the processor 150 may identify the joints of the user's hand from the second image and predict the gesture of the user's hand through the gesture classifier based on the second image and the identified joints of the user's hand. In some embodiments, as detailed in step S210, the processor 150 may predict the gesture based only on the second image and without the identified joints of the user's hand, and then confirm the predicted gesture based on the identified joints of the user's hand.

The processor 150 may present the virtual hand and a second interactive object on the display 120 in response to the identified result of the second gesture (step S270). Specifically, if the identified result is that the gesture in the second image is the same as the second gesture, the virtual hand corresponding to the user makes the second gesture. The processor 150 may present the virtual hand making the second gesture on the display 120, and the hand with the second gesture may interact with the first interactive object in the interactive area. For example, the virtual hand grabs the virtual ball. In some embodiments, an animation of the deformation of the first interactive object may be presented on the display 120, for example, squeezing the virtual ball. However, if the identified gesture of the second image is not the second gesture, the processor 150 may still present the identified gesture on the display 120. In addition, the first interactive object may be hidden because of the wrong gesture.

Furthermore, the combination of the first gesture and the second gesture is used to trigger the display 120 to present the second interactive object but hide the first interactive object. This means that the second interactive object may not be presented on the display 120 until the user makes the first gesture and then the second gesture. If a third gesture different from the second gesture is identified in the second image after the first gesture has been identified in the first image, the first interactive object is still presented on the display 120 and the second interactive object will not be presented. The second interactive object may be an image, a video, a menu, or another virtual object. On the other hand, because the second gesture has been identified, it is no longer necessary to present the first interactive object (which is the hint for the second gesture). Therefore, the first interactive object can help the user intuitively perform the combination of the first gesture and the second gesture. A sketch of this combination logic follows.
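The combination logic can be summarized as a small state machine. In the following sketch, the gesture names follow the examples of FIGS. 4A to 5B, and one identified gesture per processed image is assumed:

```python
FIRST_GESTURE, SECOND_GESTURE = "palm_up", "fist"

class GestureCombination:
    def __init__(self) -> None:
        self.first_seen = False           # first gesture already identified
        self.first_object_shown = False   # the hint object (virtual ball)
        self.second_object_shown = False  # the second object (e.g., a menu)

    def feed(self, gesture: str) -> None:
        if gesture == FIRST_GESTURE:
            self.first_seen = True
            self.first_object_shown = True     # present the hint object
        elif self.first_seen and gesture == SECOND_GESTURE:
            self.first_object_shown = False    # hide the hint object
            self.second_object_shown = True    # present the second object
        # A third, different gesture leaves the hint visible and
        # does not reveal the second interactive object.

# usage:
combo = GestureCombination()
combo.feed("palm_up")   # hint object shown
combo.feed("fist")      # hint hidden, second object shown
```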

For example, FIG. 4A and FIG. 4B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention. Referring to FIG. 4A, a palm-up gesture of the left hand, which is defined as the first gesture, is identified in the first image at a first time point. A virtual left hand LH with the palm-up gesture and a virtual ball io1 (i.e., the first interactive object) are presented on the display 120. Referring to FIG. 4B, a fist gesture of the left hand, which is defined as the second gesture, is identified in the second image at a second time point. The virtual left hand LH with the fist gesture and a main menu io2 (i.e., the second interactive object) are presented on the display 120. The main menu io2 includes multiple icons, for example, icons for a friends list, a map, and an app store.

In one embodiment, the second interactive object includes a first menu and a second menu. The second menu is different from the first menu. If the right hand is identified, the processor 150 may present the first menu on the display 120, and if the left hand is identified, the processor 150 may present the second menu on the display 120. This means that if the combination of the first gesture and the second gesture is made by the right hand, the first menu is presented on the display 120. However, if the combination of the first gesture and the second gesture is made by the left hand, the second menu is presented on the display 120, as in the sketch below.
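A one-line sketch of the handedness-dependent menu choice; the menu identifiers are illustrative names taken from FIGS. 4B and 5B:

```python
def menu_for_hand(hand: str) -> str:
    """Right hand opens the first menu; left hand opens the second menu."""
    return "quick_settings_menu" if hand == "right" else "main_menu"
```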

For example, the second menu is the main menu io2 as shown in FIG. 4B. FIG. 5A and FIG. 5B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention. Referring to FIG. 5A, a palm-up gesture of the right hand, which is defined as the first gesture, is identified in the first image at a third time point. A virtual right hand RH with the palm-up gesture and a virtual ball io3 (i.e., the first interactive object) are presented on the display 120. Referring to FIG. 5B, a fist gesture of the right hand, which is defined as the second gesture, is identified in the second image at a fourth time point. The virtual right hand RH with the fist gesture and a quick-settings menu io4 (i.e., the second interactive object, namely the first menu) are presented on the display 120. The quick-settings menu io4 includes multiple icons, for example, icons for turning the camera on/off, applying a specific motion to the virtual hand, and messaging.

In one embodiment, if the second gesture is detected, the processor 150 may further hide the first interactive object on the display 120. This means that no further hint for the subsequent gesture is needed, and the first interactive object becomes invisible. Therefore, only the second interactive object is presented on the display 120. Taking FIG. 5A and FIG. 5B as an example, after the fist gesture is identified, the virtual ball io3 is hidden.

In another embodiment, if the second interactive object has already been presented on the display 120 and the identified results of the first gesture and the second gesture are confirmed (i.e., the user makes the combination of the first gesture and the second gesture), the processor 150 may hide both the first interactive object and the second interactive object on the display 120. Therefore, the menu can be closed through gestures.

For example, FIG. 6A and FIG. 6B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention. Referring to FIG. 6A, the quick-settings menu io4 has already been presented on the display 120. The virtual ball io3 is presented because of the palm-up gesture of the right hand RH. Referring to FIG. 6B, both the virtual ball io3 and the quick-settings menu io4 are hidden because of the fist gesture of the right hand RH.

It should be noted that the first interactive object, the second interactive object, the first gesture, and the second gesture in FIG. 4A to FIG. 6B can be modified based on the actual requirements, and the embodiments are not limited thereto.

To sum up, in the operating method by gestures in XR and the head-mounted display system according to the embodiments of the present invention, a gesture combination is identified in two images, and the gesture combination is used to present the second interactive object on the display. In addition, after the first gesture is identified, the first interactive object may be displayed to further prompt the user to make the second gesture. Accordingly, intuitive gesture control can be provided.

Although the present invention has been disclosed above through the embodiments, they are not intended to limit the present invention. Any person having ordinary knowledge in the technical field may make some changes and modifications without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the scope of the appended claims.

100: head-mounted display system
110: memory
120: display
130: image sensor
150: processor
io1, io3: virtual ball
io2: main menu
io4: quick-settings menu
J: identified hand joints
LH: virtual left hand
OM: image
RH: virtual right hand
S210, S230, S250, S270, S301, S302: steps

FIG. 1 is a block diagram illustrating a head-mounted display system according to one of the exemplary embodiments of the present invention.
FIG. 2 is a flowchart illustrating an operating method by gestures in extended reality (XR) according to one of the exemplary embodiments of the present invention.
FIG. 3 is a schematic diagram illustrating the prediction of a gesture classifier according to one of the exemplary embodiments of the present invention.
FIG. 4A and FIG. 4B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention.
FIG. 5A and FIG. 5B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention.
FIG. 6A and FIG. 6B are schematic diagrams illustrating the triggering of interactive objects through gestures according to one of the exemplary embodiments of the present invention.

S210~S270: steps

Claims (16)

1. An operating method by gestures in extended reality (XR), comprising:
identifying a first gesture in a first image, wherein the first gesture corresponds to a user's hand;
presenting a virtual hand and a first interactive object located on an interactive area in response to an identified result of the first gesture, wherein the virtual hand makes the first gesture;
identifying a second gesture in a second image, wherein the second gesture corresponds to the user's hand and is different from the first gesture, and the second gesture interacts with the first interactive object in the interactive area; and
presenting the virtual hand and a second interactive object in response to an identified result of the second gesture, wherein the virtual hand makes the second gesture.

2. The operating method by gestures in extended reality according to claim 1, wherein the step of identifying the first gesture or the second gesture comprises:
identifying joints of the user's hand from the first image or the second image; and
predicting a gesture of the user's hand through a gesture classifier based on the first image or the second image and the identified joints of the user's hand, wherein the gesture classifier is trained through a machine learning algorithm.

3. The operating method by gestures in extended reality according to claim 2, wherein the step of predicting the gesture of the user's hand comprises:
predicting the gesture based only on the first image or the second image and without the identified joints of the user's hand; and
confirming the predicted gesture based on the identified joints of the user's hand.

4. The operating method by gestures in extended reality according to claim 1, further comprising:
hiding the first interactive object in response to the identified result of the second gesture.

5. The operating method by gestures in extended reality according to claim 1, further comprising:
hiding the first interactive object and the second interactive object in response to the identified result of the second gesture.

6. The operating method by gestures in extended reality according to claim 1, wherein the step of identifying the first gesture or the second gesture comprises:
identifying one of a right hand and a left hand of the user, and the step of presenting the second interactive object comprises:
presenting a first menu in response to identifying the right hand; and
presenting a second menu in response to identifying the left hand, wherein the second menu is different from the first menu, and the second interactive object comprises the first menu and the second menu.
7. The operating method by gestures in extended reality according to claim 6, wherein the first menu corresponds to a quick-settings menu, and the second menu corresponds to a main menu.

8. The operating method by gestures in extended reality according to claim 1, wherein the first gesture is a palm-up gesture, and the second gesture is a fist gesture.

9. A head-mounted display system, comprising:
an image sensor, capturing images;
a display; and
a processor, coupled to the image sensor and the display, and configured to:
identify a first gesture in a first image captured by the image sensor, wherein the first gesture corresponds to a user's hand;
present a virtual hand and a first interactive object located on an interactive area on the display in response to an identified result of the first gesture, wherein the virtual hand makes the first gesture;
identify a second gesture in a second image captured by the image sensor, wherein the second gesture corresponds to the user's hand and is different from the first gesture, and the second gesture interacts with the first interactive object in the interactive area; and
present the virtual hand and a second interactive object on the display in response to an identified result of the second gesture, wherein the virtual hand makes the second gesture.

10. The head-mounted display system according to claim 9, wherein the processor is further configured to:
identify joints of the user's hand from the first image or the second image; and
predict a gesture of the user's hand through a gesture classifier based on the first image or the second image and the identified joints of the user's hand, wherein the gesture classifier is trained through a machine learning algorithm.

11. The head-mounted display system according to claim 10, wherein the processor is further configured to:
predict the gesture based only on the first image or the second image and without the identified joints of the user's hand; and
confirm the predicted gesture based on the identified joints of the user's hand.

12. The head-mounted display system according to claim 9, wherein the processor is further configured to:
hide the first interactive object in response to the identified result of the second gesture.

13. The head-mounted display system according to claim 9, wherein the processor is further configured to:
hide the first interactive object and the second interactive object in response to the identified result of the second gesture.
14. The head-mounted display system according to claim 9, wherein the processor is further configured to:
identify one of a right hand and a left hand of the user, and presenting the second interactive object comprises:
presenting a first menu on the display in response to identifying the right hand; and
presenting a second menu on the display in response to identifying the left hand, wherein the second menu is different from the first menu, and the second interactive object comprises the first menu and the second menu.

15. The head-mounted display system according to claim 14, wherein the first menu corresponds to a quick-settings menu, and the second menu corresponds to a main menu.

16. The head-mounted display system according to claim 9, wherein the first gesture is a palm-up gesture, and the second gesture is a fist gesture.
TW109144823A 2020-11-25 2020-12-18 Operating method by gestures in extended reality and head-mounted display system TW202221474A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202017103965A 2020-11-25 2020-11-25
US17/103,965 2020-11-25

Publications (1)

Publication Number Publication Date
TW202221474A true TW202221474A (en) 2022-06-01

Family

ID=81668113

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109144823A TW202221474A (en) 2020-11-25 2020-12-18 Operating method by gestures in extended reality and head-mounted display system

Country Status (2)

Country Link
CN (1) CN114546103A (en)
TW (1) TW202221474A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117311486A (en) * 2022-06-22 2023-12-29 京东方科技集团股份有限公司 Interaction method and device for light field display and light field display system

Also Published As

Publication number Publication date
CN114546103A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
US11587297B2 (en) Virtual content generation
JP7137804B2 (en) Method and system for gesture-based interaction
US11048333B2 (en) System and method for close-range movement tracking
KR101844390B1 (en) Systems and techniques for user interface control
JP2019535055A (en) Perform gesture-based operations
US20140240225A1 (en) Method for touchless control of a device
CN110622219B (en) Interactive augmented reality
US20220066569A1 (en) Object interaction method and system, and computer-readable medium
US10168790B2 (en) Method and device for enabling virtual reality interaction with gesture control
US11054896B1 (en) Displaying virtual interaction objects to a user on a reference plane
TW202221474A (en) Operating method by gestures in extended reality and head-mounted display system
CN113168221A (en) Information processing apparatus, information processing method, and program
CN114360047A (en) Hand-lifting gesture recognition method and device, electronic equipment and storage medium
JP2016099643A (en) Image processing device, image processing method, and image processing program
US11500453B2 (en) Information processing apparatus
JP2022092745A (en) Operation method using gesture in extended reality and head-mounted display system
EP4009143A1 (en) Operating method by gestures in extended reality and head-mounted display system
US11782548B1 (en) Speed adapted touch detection
TWI696092B (en) Head mounted display system capable of creating a virtual object in a virtual environment according to a real object in a real environment and assigning a predetermined interactive characteristic to the virtual object, related method and related computer readable storage medium
US11054941B2 (en) Information processing system, information processing method, and program for correcting operation direction and operation amount
US20230061557A1 (en) Electronic device and program
Vidal Jr et al. Extending Smartphone-Based Hand Gesture Recognition for Augmented Reality Applications with Two-Finger-Pinch and Thumb-Orientation Gestures
CN116166161A (en) Interaction method based on multi-level menu and related equipment
CN116129518A (en) Somatosensory operation method based on gesture recognition
Yang et al. Around-device finger input on commodity smartwatches with learning guidance through discoverability