CN116166161A - Interaction method based on multi-level menu and related equipment - Google Patents

Interaction method based on multi-level menu and related equipment

Info

Publication number: CN116166161A
Application number: CN202310183890.3A
Authority: CN (China)
Prior art keywords: gesture, submenu, response, level menu, identification
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 孙健航
Current assignee: Beijing Zitiao Network Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority: CN202310183890.3A (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Publication: CN116166161A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The application provides an interaction method based on a multi-level menu and related equipment. A first gesture directed at a first operation object among a plurality of operation objects of the multi-level menu is identified, and the type of the first operation object is determined; when the first operation object is the identification of a first submenu, the first submenu is displayed, and when the first operation object is the identification of a first operation, the first operation is executed. Completing the interaction through gesture recognition can reduce, to a certain extent, the false-touch rate of existing interaction modes.

Description

Interaction method based on multi-level menu and related equipment
Technical Field
The application relates to the technical field of man-machine interaction, in particular to an interaction method based on a multi-level menu and related equipment.
Background
With the development of intelligent software and hardware technologies, Extended Reality (XR) technology, implemented by computer technology and wearable devices, has emerged. XR technologies may further include Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). VR is a technique that uses computer simulation to create a virtual environment. AR is a technique that merges virtual information with the real world. MR is a mixture of AR and VR.
During actual use, the user may interact with virtual objects presented in the VR/AR/MR scene. For example, when a multi-level menu is presented in a scene, a user makes a selection by clicking on a triggerable object in the multi-level menu. However, this way of interacting with the multi-level menu has a high false touch rate.
Disclosure of Invention
In view of the foregoing, it is an object of the present application to provide an interactive method and related device based on a multi-level menu, so as to solve or partially solve the foregoing problems.
In a first aspect of the present application, there is provided an interaction method based on a multi-level menu, the multi-level menu including a plurality of operation objects, the method comprising:
in response to identifying a first gesture directed at a first operation object of the plurality of operation objects, determining a type of the first operation object;
in response to determining that the first operation object is an identification of a first submenu, displaying the first submenu; or
in response to determining that the first operation object is an identification of a first operation, executing the first operation.
In a second aspect of the present application, there is provided an interaction device based on a multi-level menu, including:
an identification module configured to determine a type of a first operation object of the plurality of operation objects in response to identifying a first gesture directed at the first operation object;
an execution module configured to display a first submenu in response to determining that the first operation object is an identification of the first submenu; or
to execute a first operation in response to determining that the first operation object is an identification of the first operation.
In a third aspect of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the program.
In a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect.
In a fifth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
As can be seen from the above description, in the interaction method based on a multi-level menu and the related device provided by the present application, a first gesture directed at a first operation object among a plurality of operation objects of the multi-level menu is identified and the type of the first operation object is determined; when the first operation object is the identification of a first submenu, the first submenu is displayed, and when the first operation object is the identification of a first operation, the first operation is executed. Completing the interaction through gesture recognition can reduce, to a certain extent, the false-touch rate of existing interaction modes.
Drawings
In order to more clearly illustrate the technical solutions of the present application or the related art, the drawings required in the description of the embodiments or the related art are briefly described below. It is apparent that the drawings in the following description are only embodiments of the present application, and that those of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 shows a schematic diagram of an exemplary augmented reality system 100 according to an embodiment of the application.
Fig. 2 shows a schematic diagram of an exemplary multi-level menu based interactive screen 200.
Fig. 3A shows a schematic diagram of an exemplary screen 300 according to an embodiment of the present application.
Fig. 3B shows a schematic diagram of yet another exemplary screen 300 according to an embodiment of the present application.
FIG. 3C illustrates a schematic diagram of an exemplary gesture according to embodiments of the present application.
Fig. 3D shows a schematic diagram of yet another exemplary screen 300 according to an embodiment of the present application.
Fig. 3E shows a schematic diagram of another exemplary screen 300 according to an embodiment of the present application.
FIG. 3F illustrates a schematic diagram of another exemplary gesture according to embodiments of the present application.
Fig. 3G shows a schematic diagram of yet another exemplary screen 300 according to an embodiment of the present application.
Fig. 3H shows a schematic diagram of yet another exemplary screen 300 according to an embodiment of the present application.
Fig. 3I shows a schematic diagram of yet another exemplary screen 300 according to an embodiment of the present application.
Fig. 4A shows a schematic diagram of an exemplary multi-level menu based interactive screen 400.
Fig. 4B shows a schematic diagram of another exemplary multi-level menu based interactive screen 400.
Fig. 5A shows a schematic diagram of an exemplary screen 500 according to an embodiment of the present application.
Fig. 5B shows a schematic diagram of another exemplary screen 500 according to an embodiment of the present application.
Fig. 5C shows a schematic diagram of an exemplary screen 510 according to an embodiment of the present application.
Fig. 5D shows a schematic diagram of another exemplary screen 510 according to an embodiment of the present application.
Fig. 5E shows a schematic diagram of an exemplary screen according to an embodiment of the present application.
Fig. 6A shows a schematic diagram of an exemplary screen 600 according to an embodiment of the present application.
Fig. 6B shows a schematic diagram of another exemplary screen 600 according to an embodiment of the present application.
Fig. 6C shows a schematic diagram of yet another exemplary screen 600 according to an embodiment of the present application.
Fig. 6D shows a schematic diagram of yet another exemplary screen 600 according to an embodiment of the present application.
FIG. 7 illustrates a flow chart of an exemplary multi-level menu based interaction method 700 according to an embodiment of the present application.
FIG. 8 illustrates a schematic diagram of an exemplary multi-level menu based interaction device, according to an embodiment of the present application.
Fig. 9 shows a more specific hardware structure of the electronic device according to the present embodiment.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present application have the ordinary meaning understood by one of ordinary skill in the art to which the present application belongs. The terms "first," "second," and the like, as used in the embodiments of the present application, do not denote any order, quantity, or importance, but are used to distinguish one element from another. A word such as "comprising" or "comprises" means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. A term such as "connected" is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, which may change when the absolute position of the described object changes.
As described above, a user may interact with virtual objects presented in a VR/AR/MR scene. For example, when a multi-level menu is presented in a scene, a user makes a selection by clicking on a triggerable object in the multi-level menu. However, a multi-level menu contains multiple operation objects displayed side by side and/or closely arranged. In this case, when a triggerable object is selected by clicking, other triggerable objects adjacent to the target triggerable object are easily clicked by mistake, so this manner of interacting with the multi-level menu produces a high false-touch rate.
Fig. 1 shows a schematic diagram of an exemplary augmented reality system 100 according to an embodiment of the application.
As shown in fig. 1, the system 100 may include a head-mounted wearable device (e.g., VR glasses) 104, a wearable glove 106, and an operating handle 108. In some scenarios, a camera 110 for capturing images of the operator (user) 102 may also be provided. In some embodiments, when the aforementioned devices do not have processing capability, the system 100 may also include an external control device 112 that provides the processing capability. The control device 112 may be, for example, a computing device such as a mobile phone or a computer. In some embodiments, when any of the foregoing devices serves as a control device or a main control device, it may exchange information with other devices in the system 100 through wired or wireless communication.
In some embodiments, as shown in FIG. 1, the system 100 may also be in communication with a server 114 and may obtain data, e.g., pictures, audio, video, etc., from the server 114. In some embodiments, as shown in FIG. 1, the server 114 may retrieve desired data, e.g., pictures, audio, video, etc., from a database server 116 for storing the data.
In the system 100, the operator 102 may use the head-mounted wearable device (e.g., VR glasses) 104, the wearable glove 106, and the operating handle 108 to interact with the extended reality system 100. In some embodiments, the head-mounted wearable device 104, the wearable glove 106, and the operating handle 108 may each be provided with an acquisition unit for acquiring information, and the system 100 uses the information acquired by the acquisition units to identify the gestures and postures of the operator 102, so as to implement interaction between the operator 102 and the extended reality system 100 based on the identified gestures and postures. There may be multiple types of acquisition units. For example, the head-mounted wearable device 104 may be provided with a camera or a charge-coupled device (CCD) image sensor for acquiring images of the human eye, a speed sensor, an acceleration sensor, or an angular velocity sensor (e.g., a gyroscope) for acquiring speed, acceleration, or angular velocity information of the head-mounted wearable device 104, electrodes for acquiring brain wave information, neuromuscular sensors for acquiring neuromuscular response information, a temperature sensor for acquiring body surface temperature, and the like. For another example, the wearable glove 106 may be provided with a speed sensor, an acceleration sensor, or an angular velocity sensor (e.g., a gyroscope) for acquiring speed, acceleration, or angular velocity information of the wearable glove 106, neuromuscular sensors for acquiring neuromuscular response information, a temperature sensor for acquiring body surface temperature, and the like. It should be noted that, in addition to the head-mounted wearable device 104 and the wearable glove 106, the aforementioned acquisition units may be disposed on the operating handle 108 of fig. 1, or may be attached directly to a body part of the operator 102 rather than depending on a hardware device, so as to acquire relevant information of that body part, for example, speed, acceleration, or angular velocity information, or biological information acquired by other sensors or acquisition units (for example, images of the human eye (including images of the pupil), neuromuscular response information, brain wave information, body surface temperature, etc.). In some embodiments, the system 100 may also use images captured by the camera 110 to identify gestures of the bare hand of the operator 102, thereby completing interaction with the operator 102 based on the identified gestures.
The system 100 recognizes a gesture of the bare hand of the operator 102 based on the images acquired by the acquisition unit or the camera 110. In some embodiments, the system 100 first performs skin-color detection on the hand-shaped image in the frame acquired by the acquisition unit or the camera 110 and segments the hand-shaped image based on the result of the skin-color detection, so as to separate the target gesture of the operator 102 from the image background. It then extracts features of the target gesture (e.g., the number of fingertips, finger joints, feature vectors of the gesture, etc.) and performs gesture recognition based on the extracted features. For example, a gesture template may be used, matching the characteristic parameters of the gesture to be recognized against the characteristic parameters of the template. For another example, the extracted feature vectors may be classified using statistical analysis techniques. For another example, a neural network learning model may be used to identify gestures based on a large number of extracted gesture features.
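By way of illustration only, the following is a minimal sketch of such a recognition pipeline, assuming an OpenCV-based implementation; the skin-tone bounds, the defect-depth threshold, and the helper names are illustrative assumptions and are not taken from the application.

import cv2
import numpy as np

def segment_hand(frame_bgr):
    """Skin-color detection in YCrCb space, used to separate the hand from the background."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed skin-tone bounds
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def extract_features(mask):
    """Take the largest skin-colored contour as the target gesture and count fingertips."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    # Deep convexity defects roughly correspond to the valleys between extended fingers.
    valleys = 0 if defects is None else int(np.sum(defects[:, 0, 3] > 10000))
    return {"contour": hand, "fingertips": valleys + 1}

def match_template(features, templates):
    """Template matching on a single characteristic parameter (fingertip count)."""
    if features is None:
        return None
    best = min(templates, key=lambda t: abs(t["fingertips"] - features["fingertips"]))
    return best["name"]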
An extended reality (XR) system 100 may allow the operator 102 to interact with a digital world in a simulated scene. The simulated scene may be a purely virtual scene or a scene combining virtual and real elements. In some cases, the operator 102 may be required to interact with certain virtual objects in the simulated scene. Such a virtual object may be a multi-level menu.
Fig. 2 shows a schematic diagram of an exemplary multi-level menu based interactive screen 200.
The screen 200 may be a screen that the operator 102 views through the head-mounted wearable device 104. As shown in fig. 2, the screen 200 presents a digital world in a simulated scene, and when an input request is received, a multi-level menu 202 may be displayed in the screen 200.
In some embodiments, a plurality of triggerable objects may be displayed in the multi-level menu 202. For example, as shown in fig. 2, the multi-level menu 202 includes a plurality of operation objects 2022; when triggered, these operation objects 2022 may generate corresponding submenus or perform corresponding operations (e.g., switch to a different simulated scene), and may therefore be regarded as the triggerable objects.
After receiving the input request, in the related art, the operator 102 generally interacts with the multi-level menu 202 through a click operation.
As shown in fig. 2, in some embodiments, the multi-level menu 202 may be formed closer to the operator 102 (referred to as the near field) or further from the operator 102 (referred to as the far field). In the far field, the multi-level menu 202 appears visually small relative to the screen 200. In the related art, interaction with the multi-level menu 202 may be implemented according to whether the multi-level menu 202 is in a far-field or near-field position and according to the different input media (e.g., an operating handle or a hand) used by the operator 102, thereby generating a corresponding submenu or performing a corresponding operation in the simulated scene.
As shown in fig. 2, in the related art, when the multi-level menu 202 is in a near position (near field), a pointing point 206 is formed at the front end of the image (e.g., a hand-shaped image 204) formed in the simulated scene by the input medium used by the operator 102. The hand-shaped image 204 reflects where the hand of the operator 102 is located in the simulated scene, so that the operator 102 can operate based on the hand-shaped image 204. The pointing point 206 formed at the front of the hand-shaped image 204 serves as the point at which the hand-shaped image 204 contacts a particular object in the multi-level menu 202, so that the operator 102 determines which triggerable object in the multi-level menu 202 to trigger based on the location of the pointing point 206, and then triggers it through a clicking operation.
Further, in some related art, if the operator uses an operating handle as the input medium, a pointing point is formed at the front end of the image formed in the simulated scene by the operating handle. Similar to the related art described above, the operator can determine which triggerable object in the multi-level menu to trigger according to the position of the pointing point, and then trigger it through a click operation. In other related art, if the multi-level menu is at a far location (far field), a ray directed at the multi-level menu is formed at the front end of the image formed in the simulated scene by the input medium used by the operator. The operator determines the position of the triggerable object through the ray and triggers the triggerable object through a click operation.
It can be seen that in the related art, both the manner of forming a pointing point at the front end of a hand-shaped image or handle image and the manner of forming a ray pointing at the multi-level menu are unstable, and an operator selecting a triggerable object with a click operation easily touches other triggerable objects in the multi-level menu by mistake. When the operator interacts with a triggerable object through a pointing point, the instability comes from the area of the pointing point. As shown in fig. 2, the operation object 2024 is the target triggerable object of the operator 102; however, because the pointing point 206 has a certain area and the operation objects are closely arranged in the multi-level menu 202, when the pointing point 206 approaches the operation object 2024, part of the area of the pointing point 206 falls within the range of the operation object 2026 next to the operation object 2024. At this moment, when the operator 102 performs the click operation, the operation object 2026 is touched by mistake. When the operator interacts with a triggerable object through a ray, the instability comes from far-field interaction: the distance between the hand-shaped image or handle image and the multi-level menu is too far, the operator's hand easily shakes while manipulating the ray, and the front end of the ray cannot easily and accurately settle on the position of the target triggerable object.
In view of this, the embodiments of the present application provide an interaction method based on a multi-level menu and related devices. A first gesture directed at a first operation object among a plurality of operation objects of the multi-level menu is identified and the type of the first operation object is determined; when the first operation object is the identification of a first submenu, the first submenu is displayed, and when the first operation object is the identification of a first operation, the first operation is performed. Completing the interaction through gesture recognition can reduce, to a certain extent, the false-touch rate of existing interaction modes.
Fig. 3A shows a schematic diagram of an exemplary screen 300 according to an embodiment of the present application.
The screen 300 may be a screen that the operator 102 views through the head-mounted wearable device 104. As shown in fig. 3A, the screen 300 presents a digital world in a simulated scene, and when an input request is received, a multi-level menu 304 may be displayed in the screen 300.
In some embodiments, a plurality of triggerable objects may be displayed in the multi-level menu 304. For example, as shown in fig. 3A, the multi-level menu 304 includes a plurality of operation objects (e.g., the first operation object 3042); when triggered, these operation objects may generate corresponding submenus or perform corresponding operations (e.g., switch to a different simulated scene), and may therefore be regarded as the triggerable objects.
In some embodiments, operator 102 may implement gesture input through a bare hand, and system 100 recognizes the input gesture through the captured hand shape image, and interacts with multi-level menu 304 based on the recognized gesture.
Upon identifying a first gesture made by the operator 102 toward a triggerable object in the multi-level menu 304, the system 100 first determines the type of the triggerable object in order to perform the function corresponding to that type. As shown in fig. 3A, in some embodiments, the first gesture may include a pinch gesture 302, and the operator 102 interacts with the multi-level menu 304 based on the pinch gesture 302. Using the pinch gesture to determine the type of the selected target operation object conforms to the operator's usage and operation habits. In some embodiments, based on the acquired hand-shaped image, the system 100 identifies that a first finger 3022 and a second finger 3024 of the gesture in the hand-shaped image are in contact, and may then determine that the gesture in the hand-shaped image is a pinch gesture 302. The fingertips of the first finger 3022 and the second finger 3024 of the pinch gesture 302 allow the operator 102 to determine, based on where the fingertips are located, whether a certain triggerable object in the multi-level menu 304 is to be triggered.
As shown in fig. 3A, in some embodiments, the system 100 recognizes that the distance between the pinch gesture 302 made by the operator 102 and the first operation object 3042 is less than or equal to a third distance (e.g., 1-2 cm), and may then determine that the first operation object 3042 is the triggerable object to be triggered by the operator 102. At this point, the system 100 begins determining the type of the first operation object 3042. It will be appreciated that the third distance is set because, if the distance between the pinch gesture 302 and the first operation object 3042 is too large, on the one hand the system 100 cannot respond to the interaction between the pinch gesture 302 and the first operation object 3042, and on the other hand the system 100 may instead recognize an interaction between the pinch gesture 302 and other triggerable objects adjacent to the first operation object 3042. Therefore, requiring the distance between the pinch gesture 302 and the first operation object 3042 to be less than or equal to the third distance makes it possible to respond accurately to the interaction between them while reducing the false-touch rate. Further, in determining the distance between the pinch gesture 302 and the first operation object 3042, the reference point of the pinch gesture 302 may be chosen as needed. In some embodiments, the distance may be measured from the center of the pinch gesture 302, taken as the reference point, to the first operation object 3042. In still other embodiments, the fingertips of the first finger 3022 and the second finger 3024 of the pinch gesture 302 may be used as the reference point, and the distance from that reference point to the first operation object 3042 may be determined.
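As a concrete illustration of this check, the sketch below assumes 3D fingertip positions from a hand-tracking module; the contact threshold and the 2 cm value for the third distance are illustrative assumptions.

import numpy as np

PINCH_CONTACT_THRESHOLD = 0.008  # metres; fingertip separation treated as "in contact" (assumed)
THIRD_DISTANCE = 0.02            # metres; e.g. the 1-2 cm third distance mentioned above

def is_pinch(thumb_tip, index_tip):
    """The first and second fingers are considered pinched when their tips touch."""
    gap = np.linalg.norm(np.asarray(thumb_tip, float) - np.asarray(index_tip, float))
    return gap <= PINCH_CONTACT_THRESHOLD

def pinch_selects_object(thumb_tip, index_tip, object_center):
    """Use the fingertip midpoint as the reference point of the pinch, and require it to be
    within the third distance of the operation object before its type is determined."""
    if not is_pinch(thumb_tip, index_tip):
        return False
    reference = (np.asarray(thumb_tip, float) + np.asarray(index_tip, float)) / 2.0
    return np.linalg.norm(reference - np.asarray(object_center, float)) <= THIRD_DISTANCE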
Fig. 3B shows a schematic diagram of an exemplary screen 300 according to an embodiment of the present application.
The pinch gesture 302 is a hand-shaped image generated in the screen 300 based on a corresponding gesture made by the operator 102 in the real physical world. In some embodiments, the operator 102 may first make a third gesture in the real physical world; as shown in fig. 3B, the third gesture includes an open gesture 312. The operator 102 approaches a triggerable object in the multi-level menu based on the hand-shaped image generated in the screen 300 by the open gesture 312. The tips of the fingers 3122 and 3124 of the open gesture 312 allow the operator 102 to determine, based on where the tips are located, whether a triggerable object in the multi-level menu 304 is the target triggerable object (e.g., the first operation object 3042).
As shown in fig. 3C, in some embodiments, based on the acquired hand-shaped image, the system 100 identifies that the opening-and-closing angle 3126 between the fingers 3122 and 3124 of the gesture in the hand-shaped image is within a preset angle range (e.g., the opening-and-closing angle 3126 may be set to 0 to 30 degrees), and determines that the gesture in the hand-shaped image is an open gesture 312, i.e., the starting gesture from which a pinch gesture is made. The preset angle range can be adapted to the operation habits of the operator to improve the operation experience. It should be noted that, as an alternative embodiment, the opening-and-closing angle 3126 may be the acute angle formed by the lines connecting the web of the hand (between the thumb and index finger) of the open gesture 312, taken as the vertex, to the fingertips of the two fingers. The operator 102 keeps the third gesture approaching the first operation object 3042 in the multi-level menu 304 until the system 100 recognizes that the distance between the third gesture and the first operation object 3042 is less than or equal to the third distance, and the operator 102 then makes a pinch gesture 302 with respect to the first operation object 3042 starting from the open gesture 312, as shown in fig. 3C.
Based on the way the pinch gesture is made described above, the open gesture 312 may be set as the gesture for calling up (waking up) the multi-level menu, leading naturally into the pinch gesture 302. In this way, after the multi-level menu is called up, the operator 102 can continue directly into the pinch gesture 302 from the open gesture 312, saving the effort of making a separate menu-invoking gesture. In some embodiments, the system 100 recognizes an open gesture 312 made by the operator 102 in the simulated scene and recognizes that the open gesture 312 has remained stationary at any position for a time greater than or equal to a first time (e.g., remains stationary for 2 seconds); as shown in fig. 3B, the multi-level menu 304 is then displayed at position 1, where the tips of the fingers 3122 and 3124 of the open gesture 312 are located. It will be appreciated that if the dwell time used to decide whether to call up the multi-level menu 304 were required to be exactly equal to the first time, then after the multi-level menu 304 is called up, the system 100 would recognize that the open gesture 312 has remained in the same position for longer than the first time, determine that the condition for displaying the multi-level menu 304 is no longer satisfied, and close the multi-level menu 304. Therefore, the dwell-time condition of the open gesture 312 needs to be set to greater than or equal to the first time, to avoid the multi-level menu 304 being closed because the gesture dwells too long.
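The following sketch illustrates both conditions described above: the opening-and-closing-angle check for the open gesture and the dwell-time check for calling up the menu. The vector math assumes 3D positions from hand tracking, and the numeric values simply reuse the examples given above.

import time
import numpy as np

OPEN_ANGLE_RANGE = (0.0, 30.0)  # degrees; the example preset angle range above
FIRST_TIME = 2.0                # seconds; the example first time above

def opening_angle(web_point, tip_a, tip_b):
    """Acute angle at the web of the hand between the lines to the two fingertips."""
    u = np.asarray(tip_a, float) - np.asarray(web_point, float)
    v = np.asarray(tip_b, float) - np.asarray(web_point, float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

class MenuWake:
    """Call up the multi-level menu once the open gesture has stayed put for FIRST_TIME or longer."""
    def __init__(self):
        self.dwell_start = None

    def update(self, web_point, tip_a, tip_b, hand_moved):
        angle = opening_angle(web_point, tip_a, tip_b)
        is_open = OPEN_ANGLE_RANGE[0] <= angle <= OPEN_ANGLE_RANGE[1]
        if not is_open or hand_moved:
            self.dwell_start = None
            return False
        if self.dwell_start is None:
            self.dwell_start = time.monotonic()
        # ">=" rather than "==": the menu stays up even after the dwell exceeds the first time.
        return time.monotonic() - self.dwell_start >= FIRST_TIME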
In still other embodiments, the operator 102 may also call up the multi-level menu by touching the head-mounted wearable device 104. As described above, the head-mounted wearable device 104 may be provided with an acquisition unit for acquiring information, and the acquisition unit may be a temperature sensor for acquiring the body surface temperature of the operator 102 (e.g., the temperature of a finger of the operator 102). Through the finger temperature captured by the temperature sensor, the system 100 determines whether the finger of the operator 102 is within the range of a touch point provided on the head-mounted wearable device 104 for calling up the multi-level menu. If the system 100 recognizes that the finger of the operator 102 has been within the touch point for greater than or equal to a certain amount of time, the multi-level menu is displayed at a position in the simulated scene.
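A comparable sketch for this touch-point path, assuming the temperature sensor exposes a per-frame reading for the touch point; the temperature threshold and dwell time are illustrative assumptions.

import time

TOUCH_TEMP_THRESHOLD = 30.0  # degrees Celsius; skin contact assumed above this value (illustrative)
TOUCH_DWELL_TIME = 1.0       # seconds the finger must remain on the touch point (illustrative)

class TouchWake:
    """Call up the multi-level menu after the finger has rested on the headset touch point long enough."""
    def __init__(self):
        self.touch_start = None

    def update(self, touch_point_temperature):
        if touch_point_temperature < TOUCH_TEMP_THRESHOLD:
            self.touch_start = None
            return False
        if self.touch_start is None:
            self.touch_start = time.monotonic()
        return time.monotonic() - self.touch_start >= TOUCH_DWELL_TIME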
Fig. 3D shows a schematic diagram of an exemplary screen 300 according to an embodiment of the present application.
The type of a triggerable object may be a submenu or an operation corresponding to the triggerable object. In some embodiments, if the triggerable object the operator 102 selects to trigger is a submenu of the multi-level menu, the operator 102 keeps the pinch gesture moving in the real physical world to control the movement of the triggerable object in the simulated scene. As shown in fig. 3D, after the system 100 determines that the type of the first operation object 3042 is the identification 3044 of the first submenu, if it recognizes that the operator 102 keeps the pinch gesture 302 moving in direction 1 in the simulated scene, the system 100 controls the identification 3044 of the first submenu to follow the pinch gesture 302 as it moves in the simulated scene. In some embodiments, the image of the identification 3044 of the first submenu in the simulated scene may differ from the first submenu in transparency or in size, or may be presented as a single-color block image in the simulated scene.
In some embodiments, while the identification 3044 of the first submenu moves along with the pinch gesture 302, the operator 102 may determine the position of the pinch gesture 302 based on the hand-shaped image in the simulated scene, and then determine the position of the first submenu based on the positions of the fingertips of the first finger 3022 and the second finger 3024 of the pinch gesture 302, or based on the position of the identification 3044 of the first submenu. As shown in fig. 3E, if the operator 102 decides that the first submenu 306 is to be displayed at position 2, then after determining from the image in the simulated scene that the identification 3044 of the first submenu is located at position 2, the operator 102 makes a second gesture in the real physical world starting from the pinch gesture 302, as shown in fig. 3F. The second gesture includes a release gesture 322; setting the release gesture 322, made from the pinch gesture 302, as the condition for displaying the first submenu conforms to the operator's usage and operation habits and improves the operation experience. After the system 100 recognizes the release gesture 322 made by the operator 102, the image of the first submenu 306 is displayed at position 2. As shown in fig. 3F, in some embodiments, based on the acquired hand-shaped image, the system 100 identifies that the opening-and-closing angle 3226 between the first finger 3222 and the second finger 3224 of the gesture in the hand-shaped image is within a preset angle range (e.g., the range of the opening-and-closing angle 3226 may be set to be the same as that of the opening-and-closing angle 3126), and determines that the gesture in the hand-shaped image is the release gesture 322.
To avoid the problem of the first submenu 306 failing to be displayed because the identification 3044 of the first submenu is too close to the multi-level menu 304, as depicted in fig. 3G, in some embodiments it may be required that the distance 31 by which the identification 3044 of the first submenu follows the pinch gesture 302, or the distance between the identification 3044 of the first submenu and the multi-level menu 304, exceeds a first distance (e.g., 1-2 cm); only then, after recognizing the release gesture 322, does the system 100 display the first submenu 306. So that the operator 102, while controlling the movement of the identification 3044 of the first submenu with the hand-shaped image generated in the simulated scene, can tell from the image whether the distance between the identification 3044 of the first submenu and the multi-level menu 304 has exceeded the first distance, in some embodiments the identification 3044 of the first submenu and the image of the multi-level menu 304 may be displayed in the same color or the same gray scale while the distance between them is still within the first distance, and restored to their original colors once the distance exceeds the first distance. This allows the operator 102 to determine clearly from the images in the simulated scene where the first submenu can be displayed by making the release gesture 322, without the first submenu 306 failing to be displayed because the identification 3044 of the first submenu is too close to the multi-level menu 304.
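The drag, color feedback, and release logic described in the last two paragraphs can be sketched as follows; the data structure, the 2 cm first distance, and the return values are illustrative assumptions rather than the application's implementation.

from dataclasses import dataclass
import numpy as np

FIRST_DISTANCE = 0.02  # metres; e.g. the 1-2 cm first distance mentioned above

@dataclass
class SubmenuIdentification:
    position: np.ndarray               # where the identification 3044 is drawn in the scene
    use_original_color: bool = False   # False: shown in the menu's color/gray while still too close

class SubmenuDrag:
    def __init__(self, identification, menu_position):
        self.identification = identification
        self.menu_position = np.asarray(menu_position, float)

    def on_pinch_moved(self, pinch_position):
        """The identification follows the pinch; its color signals whether a release would display it."""
        self.identification.position = np.asarray(pinch_position, float)
        self.identification.use_original_color = self._distance_from_menu() > FIRST_DISTANCE

    def on_release(self):
        """On a release gesture, display the first submenu only beyond the first distance."""
        if self._distance_from_menu() > FIRST_DISTANCE:
            return ("display_submenu", tuple(self.identification.position))
        return ("none", None)  # still too close to the multi-level menu; nothing is displayed

    def _distance_from_menu(self):
        return float(np.linalg.norm(self.identification.position - self.menu_position))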
In some embodiments, while the operator 102 selects and moves the identification 3044 of the first submenu based on the image of the pinch gesture 302 generated in the simulated scene, the identification 3044 of the first submenu may be flipped, dragged, and so on by maintaining the pinch gesture 302. As shown in fig. 3H, the system 100 may control the identification 3044 of the first submenu to follow the pinch gesture 302 and flip from position 3 to position 4. Further, as shown in fig. 3I, the operator 102 may then, at position 4, continue to rotate the identification 3044 of the first submenu in direction 2 based on the image of the pinch gesture 302 generated in the simulated scene, relative to the viewing angle facing the screen 300, so that the display surface of the first submenu 306 faces the operator 102, making it convenient for the operator 102 to view the content of the target operation object.
Fig. 4A shows a schematic diagram of an exemplary multi-level menu based interactive screen 400.
In the related art, after a submenu is selected and displayed by clicking, the multi-level menu is no longer displayed. As shown in fig. 4A, after the operator 102 clicks on the submenu 4026 in the multi-level menu 402 based on the pointing point 406 at the front of the hand-shaped image 404 in the simulated scene, the submenu 4026 expands and covers the other triggerable objects in the multi-level menu 402, as shown in fig. 4B. Thus, if the operator 102 does not find the desired function or triggerable object in the submenu 4026, it is necessary to click back to the previous-level menu, and the multi-level menu 402 restores the image shown in fig. 4A in the simulated scene. This makes it harder for the operator 102 to find the target triggerable object and wastes time, while constantly switching between the multi-level menu 402 and different operation objects increases the computational power consumption.
Fig. 5A shows a schematic diagram of an exemplary screen 500 according to an embodiment of the present application.
To solve the above problem, in some embodiments, the first submenu may be displayed while the multi-level menu remains displayed. In this way, when the operator 102 continues to perform gesture interaction on other operation objects of the multi-level menu, since the multi-level menu is still displayed in the simulated scene, the operator 102 can apply interaction gestures directly to those other operation objects. As shown in fig. 5A, taking the multi-level menu 502 as an example, in the screen 500 a first submenu 504 corresponding to a first operation object 5022 of the multi-level menu 502 is already displayed in the simulated scene. While the first submenu 504 (and any other submenus) remain displayed, the operator 102 continues to make a pinch gesture 508 directed at a second operation object 5024 of the multi-level menu 502 based on the hand-shaped image generated in the simulated scene. The system 100 recognizes the pinch gesture 508 of the operator 102 with respect to the second operation object 5024 of the multi-level menu 502 and determines that the second operation object 5024 is the target operation object to be selected by the operator 102. Next, the system 100 determines the type of the second operation object 5024. If the system 100 determines that the second operation object 5024 is the identification of a second submenu, the second submenu 506 is displayed in addition to the multi-level menu 502 and the first submenu 504, as shown in fig. 5B.
Fig. 5C shows a schematic diagram of an exemplary screen 510 according to an embodiment of the present application.
As shown in fig. 5C, in some embodiments, taking the multi-level menu 512 as an example, in the screen 510 a first submenu 514 corresponding to the first operation object 5122 of the multi-level menu 512 is already displayed in the simulated scene. While the first submenu 514 (and any other submenus) remain displayed, the operator 102 continues to make a pinch gesture 518 directed at a first sub-operation object 5142 among the plurality of operation objects in the first submenu 514, based on the hand-shaped image generated in the simulated scene. The system 100 recognizes the pinch gesture 518 of the operator 102 with respect to the first sub-operation object 5142 of the first submenu 514 and determines that the first sub-operation object 5142 is the target operation object to be selected by the operator 102. Next, the system 100 determines the type of the first sub-operation object 5142. If the system 100 determines that the first sub-operation object 5142 is the identification of a third submenu, the third submenu 516 is displayed while the multi-level menu 512 and the first submenu 514 remain displayed, as shown in fig. 5D.
After such operations are repeated a number of times, a plurality of submenus are formed and displayed in a tree structure, as shown in fig. 5E. When the submenus are displayed in the scene in the configuration shown in fig. 5E, the operator 102 can clearly and intuitively see the functions or next-level submenus contained in each submenu. When the operator 102 finds the target operation object to be selected, the operator can interact with it directly through the hand-shaped image generated in the simulated scene, avoiding the time cost and energy consumption caused in the related art by switching between different submenus. In addition, with this way of presenting the submenus, the submenus do not overlap, which reduces the difficulty of image recognition of the submenus by the terminal device. It should be noted that displaying the menus in a tree structure in the embodiment of the present application is only exemplary; other display structures that prevent the menus from overlapping or being occluded, or display structures with a similar effect, should fall within the protection scope of the embodiments of the present application.
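One way to realize such a non-overlapping, simultaneously visible arrangement is to keep the opened menus in a simple tree, as sketched below; the class and field names are illustrative assumptions.

class MenuNode:
    """A menu (or submenu) together with the submenus that have been opened from it."""
    def __init__(self, name, is_operation=False):
        self.name = name
        self.is_operation = is_operation  # operation identification vs. submenu identification
        self.parent = None
        self.children = []                # submenus currently opened from this node
        self.visible = False

    def open_child(self, child):
        """Display a submenu without hiding this menu, extending the tree of fig. 5E."""
        child.parent = self
        child.visible = True
        self.children.append(child)
        return child

# Usage: the multi-level menu stays displayed while its submenus are opened alongside it.
root = MenuNode("multi-level menu"); root.visible = True
first = root.open_child(MenuNode("first submenu"))
second = root.open_child(MenuNode("second submenu"))
third = first.open_child(MenuNode("third submenu"))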
In some embodiments, if the triggerable object the operator 102 selects to trigger corresponds to a first operation of a triggerable object in the multi-level menu, the operator 102 keeps the pinch gesture moving in the real physical world to control the movement of the identification of the first operation in the simulated scene. Similarly to the way the movement of the identification 3044 of the first submenu is controlled, as shown in fig. 3D, after the system 100 determines that the type of the first operation object 3042 is the identification of the first operation, if it recognizes that the operator 102 keeps the pinch gesture 302 moving in direction 1 in the simulated scene, the system 100 controls the identification of the first operation to follow the pinch gesture 302 as it moves in the simulated scene. Likewise, in some embodiments, the image of the identification of the first operation in the simulated scene may differ from the multi-level menu 304 in transparency or in size, or may be presented as a single-color block image in the simulated scene.
In some embodiments, while the identification of the first operation moves along with the pinch gesture 302, if the first operation is an operation that must be performed at a specific position, the operator 102 may determine the position of the pinch gesture 302 based on the hand-shaped image in the simulated scene, and then determine the position at which the first operation is to be performed based on the positions of the fingertips of the first finger 3022 and the second finger 3024 of the pinch gesture 302, or based on the position of the identification of the first operation. If the first operation is an operation that can be performed at any location in the simulated scene, the operator 102 may make a second gesture at any location starting from the pinch gesture 302, causing the system 100 to perform the first operation. Similarly to the way the first submenu 306 is displayed, in some embodiments the second gesture includes a release gesture 322. After the system 100 recognizes the release gesture 322 made by the operator 102, the first operation is performed.
Likewise, to avoid the problem of the system 100 failing to perform the first operation because the identification of the first operation is too close to the multi-level menu 304, in some embodiments it may be required that the distance by which the identification of the first operation, following the pinch gesture 302, moves away from the multi-level menu 304 exceeds a second distance (e.g., 1-2 cm); only then, after recognizing the release gesture 322, does the system 100 perform the first operation. So that the operator 102, while controlling the movement of the identification of the first operation with the hand-shaped image generated in the simulated scene, can tell from the image whether the distance between the identification of the first operation and the multi-level menu 304 has exceeded the second distance, in some embodiments the identification of the first operation and the image of the multi-level menu 304 may be displayed in the same color or the same gray scale while the distance between them is still within the second distance, and restored to their original colors once the distance exceeds the second distance. This allows the operator 102 to determine clearly from the images in the simulated scene when the release gesture 322 can be made to cause the system 100 to perform the first operation, without the first operation failing because its identification is too close to the multi-level menu 304.
In some embodiments, after the system 100 performs the first operation, the other submenus and the multi-level menu in the simulated scene shown in fig. 5E are closed. In line with the operator's habits, once the first operation has been executed it is treated as the operator's target operation object, and the multi-level menu is closed automatically, so the operator does not need to close it manually, which facilitates operation. In addition, selecting the triggerable object through gesture recognition together with judgment conditions makes it possible to recognize whether the gesture has accurately selected the triggerable object, reducing the false-touch rate produced in the related art by click selection without any judgment condition.
Fig. 6A shows a schematic diagram of an exemplary screen 600 according to an embodiment of the present application.
When multiple submenus are displayed in the simulated scene and the operator 102 wants to close some of the submenus, or close all submenus and the multi-level menu, in some embodiments the operator 102 may make a fourth gesture in the real physical world. As shown in fig. 6A, the fourth gesture includes a palm-slap gesture 602; the palm-slap gesture 602 conforms to the operator's habits and is convenient to perform. After the system 100 recognizes that the operator 102 has made a palm-slap gesture 602 toward the submenu or multi-level menu to be closed, the corresponding submenu or multi-level menu is closed. The operator can selectively close a single menu or multiple menus, which improves the flexibility of operation and the operator's experience.
The palm-slap gesture 602 is not limited to being parallel to the screen 600 as shown in fig. 6A, nor does it need to be perpendicular or parallel to the multi-level menu. As shown in fig. 6B, in some embodiments, when the system 100 recognizes that the palm-slap gesture 602 is inclined at an angle with respect to the multi-level menu 604, the multi-level menu 604 may also be closed. Further, upon recognizing that a hand-shaped image in the simulated scene moves from a first position 612 to a second position 614, and that the distance 616 between the second position 614 and the multi-level menu 604 is less than a third distance (e.g., 1-2 cm), the system 100 may determine that the gesture in the hand-shaped image is a palm-slap gesture 602. It will be appreciated that the reference point used to determine the movement of the palm-slap gesture 602 from the first position 612 to the second position 614 is not limited to the heel of the palm shown in fig. 6B; as alternative embodiments, the center of the palm gesture or a fingertip of the palm gesture may be used as the reference point, which is not limited in the embodiments of the present application. Similarly, the reference point for determining the distance 616 may be taken from another position of the palm gesture, which is not limited in the embodiments of the present application. In addition, the condition by which the system 100 recognizes the palm-slap gesture 602 is not limited to the movement distance described above; as an alternative implementation, it may be a condition such as the speed at which the palm-slap gesture 602 moves from the first position 612 to the second position 614. A single judgment condition or a combination of judgment conditions may also be set as the recognition condition for the palm-slap gesture 602 to improve the accuracy of gesture recognition, which is not limited in the embodiments of the present application.
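As an illustration of these judgment conditions, the sketch below combines the movement-distance, menu-proximity, and speed conditions; the choice of reference point, the thresholds, and the minimum speed are illustrative assumptions.

import numpy as np

SLAP_PROXIMITY = 0.02  # metres; e.g. the 1-2 cm distance to the menu mentioned above
MIN_SLAP_SPEED = 0.3   # metres per second; optional extra condition to reject slow drifts (assumed)

def is_palm_slap(first_pos, second_pos, menu_pos, elapsed_seconds):
    """Open-palm movement from a first position to a second position that ends close to the menu."""
    first_pos, second_pos, menu_pos = (np.asarray(p, float) for p in (first_pos, second_pos, menu_pos))
    moved = float(np.linalg.norm(second_pos - first_pos))
    close_to_menu = float(np.linalg.norm(second_pos - menu_pos)) <= SLAP_PROXIMITY
    fast_enough = elapsed_seconds > 0 and (moved / elapsed_seconds) >= MIN_SLAP_SPEED
    # A single condition, or a combination, may be used; here both must hold.
    return close_to_menu and fast_enough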
As described above, the operator 102 may close a menu in the simulated scene, or close all submenus and the multi-level menu, by making a palm-slap gesture 602 in the simulated scene. As shown in fig. 6C, in some embodiments, if the operator 102 wants to close a particular menu (e.g., submenu 608) in the simulated scene, the operator 102 may make a palm-slap gesture 602 in the simulated scene, based on the hand-shaped image, directed only at that submenu 608. The system 100 recognizes the palm-slap gesture 602 directed at the submenu 608 and closes the submenu 608. In some embodiments, if the operator 102 wants to close all the menus in the simulated scene, as shown in fig. 6A, the operator 102 may make a palm-slap gesture 602 directed at the overall image of the menus generated in the simulated scene, based on the hand-shaped image. The system 100 recognizes the palm-slap gesture 602 directed at the overall image and closes all the menus in the simulated scene. In still other embodiments, as shown in fig. 6D, the operator 102 may make a palm-slap gesture 602 in the simulated scene, based on the hand-shaped image, directed only at the multi-level menu 606 at the lowest level of the tree structure. Since all the menus in the simulated scene are generated from the multi-level menu 606, the system 100 may close all the menus after recognizing the palm-slap gesture 602 directed at the multi-level menu 606.
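Continuing the MenuNode sketch from above, the closing behaviour in this paragraph can be expressed as follows, under the same assumptions: slapping one submenu closes that submenu and everything opened from it, while slapping the root menu, from which all menus were generated, closes everything.

def close_subtree(node):
    """Hide a menu node and every submenu that was opened from it."""
    node.visible = False
    for child in node.children:
        close_subtree(child)
    node.children.clear()

def on_palm_slap(target_node, root_node):
    """Close a single submenu, or every menu when the root multi-level menu is the target."""
    if target_node is root_node:
        close_subtree(root_node)    # all menus were generated from the root, so all of them close
    else:
        close_subtree(target_node)  # only the slapped submenu and its descendants close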
FIG. 7 illustrates a flow chart of an exemplary multi-level menu based interaction method 700 according to an embodiment of the present application. The method 700 may be implemented by the system 100, for example, by the system 100 generating a screen (e.g., the screen 300 of fig. 3A) including a multi-level menu (e.g., the multi-level menu 304 of fig. 3A) and a hand image, the operator 102 making different gestures to interact with the multi-level menu based on the hand image generated in the simulated scene. In some embodiments, the operator 102 may implement gesture input through a bare hand, and the system 100 may acquire a front image in real time through a video camera or the like disposed in front of the head-mounted wearable device 104 and recognize the gesture of the operator 102 by recognizing the image.
Taking any of the simulation scenarios as an example, a multi-level menu (e.g., multi-level menu 304 of fig. 3A) may include a plurality of operational objects, and method 700 may include the following steps.
In step S702, in response to identifying a first gesture directed at a first operation object (e.g., the first operation object 3042 of fig. 3A) of the plurality of operation objects, the type of the first operation object is determined. The type of the first operation object may be a submenu or an operation corresponding to the first operation object, and different types of operation objects correspond to different display effects.
In some embodiments, the type of the first operation object is determined in response to identifying a pinch gesture (e.g., the pinch gesture 302 of fig. 3A) on the first operation object. In some embodiments, the first gesture is determined to be the pinch gesture in response to identifying that a first finger (e.g., the first finger 3022 of fig. 3A) and a second finger (e.g., the second finger 3024 of fig. 3A) of the first gesture are in contact. Using the pinch gesture to determine the type of the selected target operation object conforms to the operator's usage and operation habits.
In some embodiments, the type of the first operation object is determined in response to the distance between the first gesture and the first operation object being less than or equal to a third distance. The distance between the pinch gesture 302 and the first operation object 3042 is required to be less than or equal to the third distance so as to respond accurately to the interaction between the pinch gesture 302 and the first operation object 3042 while reducing the false-touch rate.
In some embodiments, before the type of the first operation object is determined, the multi-level menu (e.g., multi-level menu 304 of fig. 3B) is displayed in response to identifying a third gesture (e.g., spread gesture 312 of fig. 3B). Because the spread gesture 312 is the starting posture of the pinch gesture made by the operator, setting the spread gesture 312 to call up the multi-level menu 304 saves the effort that would otherwise be required to make a separate gesture for calling up the menu.
In some embodiments, before the type of the first operation object is determined, the multi-level menu is displayed at a position corresponding to the third gesture in response to the third gesture remaining stationary for a time greater than or equal to a first time. If the dwell condition of the spread gesture 312 for calling up the multi-level menu 304 were instead that the dwell time equals the first time exactly, then after the multi-level menu 304 is called up, the system 100 would recognize that the spread gesture 312 has stayed in the same position for longer than the first time, determine that the condition for displaying the multi-level menu 304 is no longer satisfied, and close the multi-level menu 304. The dwell condition of the spread gesture 312 is therefore set to be greater than or equal to the first time, which avoids the multi-level menu 304 being closed merely because the gesture dwells too long.
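A minimal sketch of this dwell check, assuming the system samples the palm position of the spread gesture every frame (the time and drift thresholds are illustrative, not disclosed values):

    import math
    import time

    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    class SpreadDwellDetector:
        FIRST_TIME = 0.5        # seconds the spread gesture must stay put (assumed value)
        DRIFT_TOLERANCE = 0.02  # metres of drift still counted as "stationary" (assumed)

        def __init__(self):
            self.anchor_pos = None
            self.anchor_time = None
            self.menu_shown = False

        def update(self, palm_pos, is_spread_gesture):
            if not is_spread_gesture:
                self.anchor_pos = self.anchor_time = None
                return self.menu_shown
            now = time.monotonic()
            if self.anchor_pos is None or _dist(palm_pos, self.anchor_pos) > self.DRIFT_TOLERANCE:
                self.anchor_pos, self.anchor_time = palm_pos, now
            elif not self.menu_shown and now - self.anchor_time >= self.FIRST_TIME:
                # ">=" rather than "==", so a longer dwell keeps the menu open
                # instead of closing it again.
                self.menu_shown = True
            return self.menu_shown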
In step S704, in response to determining that the first operation object is an identification of a first submenu (e.g., identification 3044 of the first submenu of fig. 3D), the first submenu (e.g., first submenu 306 of fig. 3E) is displayed.
In some embodiments, the first submenu (e.g., the first submenu 306 of fig. 3E) is displayed while the multi-level menu (e.g., the multi-level menu 304 of fig. 3E) remains displayed. Because the multi-level menu 304 and the first submenu 306 are displayed simultaneously, when the operator does not find the target operation object in the first submenu 306, the operator can directly search the multi-level menu 304 for another target operation object, which avoids the high operating cost in the related art of having to switch between menus of different levels to find the target operation object.
In some embodiments, in response to determining that the first operational object (e.g., the first operational object 3042 of fig. 3G) is an identification of a first submenu (e.g., the identification 3044 of the first submenu of fig. 3G), the identification of the first submenu is controlled to move following the first gesture (e.g., the pinch gesture 302 of fig. 3G);
the first submenu is displayed in response to the identification of the first submenu moving, following the first gesture, a distance (e.g., distance 31 of fig. 3G) greater than a first distance. Displaying the first submenu only after the identification of the first submenu has moved farther than the first distance with the first gesture avoids the problem that the first submenu 306 cannot be displayed successfully because the identification 3044 of the first submenu is too close to the multi-level menu 304.
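The drag-to-open condition could be expressed roughly as below; this is a sketch under assumed names and thresholds, not the disclosed implementation:

    import math

    FIRST_DISTANCE = 0.10   # assumed drag threshold, in metres

    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    class SubmenuDrag:
        def __init__(self, grab_pos):
            self.grab_pos = grab_pos   # where the identification of the submenu was pinched
            self.opened = False

        def follow(self, pinch_pos, show_submenu):
            # The identification tracks the pinch gesture; once it has been dragged
            # farther than FIRST_DISTANCE from its starting point, the submenu is shown
            # at the current pinch position.
            if not self.opened and _dist(pinch_pos, self.grab_pos) > FIRST_DISTANCE:
                show_submenu(pinch_pos)
                self.opened = True
            return pinch_pos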
In some embodiments, in response to identifying a second gesture for the identification of the first submenu, the first submenu is displayed at a position corresponding to the second gesture. In some embodiments, in response to identifying a release gesture (e.g., release gesture 322 of fig. 3F) for the identification of the first submenu, the first submenu is displayed at a position corresponding to the release gesture. Using the release gesture 322, which naturally follows the pinch gesture 302, as the condition for displaying the first submenu conforms to the usage and operation habits of the operator and improves the operator's experience.
In some embodiments, the second gesture is determined to be the release gesture in response to identifying that the opening and closing angle (e.g., opening and closing angle 3226 of fig. 3F) between the first finger (e.g., first finger 3222 of fig. 3F) and the second finger (e.g., second finger 3224 of fig. 3F) is within a preset angle range. The preset angle range can be adjusted adaptively based on the operation habits of the operator, improving the operator's experience.
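One plausible way to compute such an opening and closing angle from hand-tracking joints is sketched below; the joint choice and the preset range are assumptions made for illustration:

    import math

    def opening_angle_deg(wrist, thumb_tip, index_tip):
        # Angle between the wrist->thumb and wrist->index vectors, in degrees,
        # used here as a stand-in for the opening and closing angle of the two fingers.
        v1 = [t - w for t, w in zip(thumb_tip, wrist)]
        v2 = [i - w for i, w in zip(index_tip, wrist)]
        n1 = math.sqrt(sum(a * a for a in v1))
        n2 = math.sqrt(sum(b * b for b in v2))
        if n1 == 0 or n2 == 0:
            return 0.0
        cos = sum(a * b for a, b in zip(v1, v2)) / (n1 * n2)
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

    def is_release_gesture(wrist, thumb_tip, index_tip, angle_range=(15.0, 60.0)):
        # The preset range is illustrative and could be tuned per operator.
        low, high = angle_range
        return low <= opening_angle_deg(wrist, thumb_tip, index_tip) <= high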
In some embodiments, after the first submenu (e.g., the first submenu 504 of fig. 5A) is displayed, a type of a second operation object (e.g., the second operation object 5024 of fig. 5A) of the plurality of operation objects is determined in response to identifying a first gesture (e.g., the pinch gesture 512 of fig. 5A) for the second operation object;
in response to determining that the second operation object is an identification of a second submenu, the second submenu (e.g., the second submenu 506 of fig. 5B) is displayed while the multi-level menu (e.g., the multi-level menu 502 of fig. 5B) and the first submenu (e.g., the first submenu 504 of fig. 5B) remain displayed. The displayed menus can be arranged in a tree structure in the simulated scene, so that the operator can view the target operation object clearly and intuitively. At the same time, the multiple menus displayed in the tree structure do not occlude one another, which reduces the difficulty for the terminal device of recognizing the submenus in the image. It should be noted that displaying the multiple menus in a tree structure is only exemplary in the embodiments of the present application; other display structures that keep the multiple menus from overlapping and occluding one another should also fall within the protection scope of the embodiments of the present application.
In some embodiments, after the first submenu (e.g., first submenu 514 of fig. 5C) is displayed, a type of a first sub-operation object (e.g., first sub-operation object 5142 of fig. 5C) of the plurality of sub-operation objects is determined in response to identifying the first gesture (e.g., pinch gesture 518 of fig. 5C) for the first sub-operation object;
in response to determining that the first sub-operation object is an identification of a third submenu, the third submenu (e.g., third submenu 516 of fig. 5C) is displayed while the multi-level menu (e.g., multi-level menu 512 of fig. 5D) and the first submenu (e.g., first submenu 514 of fig. 5D) remain displayed. The displayed menus can be arranged in a tree structure in the simulated scene, so that the operator can view the target operation object clearly and intuitively. At the same time, the multiple menus displayed in the tree structure do not occlude one another, which reduces the difficulty for the terminal device of recognizing the submenus in the image. It should be noted that displaying the multiple menus in a tree structure is only exemplary in the embodiments of the present application; other display structures that keep the multiple menus from overlapping or occluding one another, or display structures with similar effects, should also fall within the protection scope of the embodiments of the present application.
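As a sketch of one possible non-overlapping arrangement (the spacing values are assumptions), the open menus could be laid out by level and by leaf row, reusing any node object that exposes a children list, such as the illustrative MenuNode shown earlier:

    COL_WIDTH = 0.35   # assumed horizontal spacing between menu levels, in metres
    ROW_HEIGHT = 0.12  # assumed vertical spacing between sibling rows, in metres

    def layout_menu_tree(root):
        """Assign a non-overlapping (x, y) slot to every open menu: one column per level,
        one row per leaf, so that no panel occludes another."""
        next_row = [0]

        def place(node, depth):
            if not node.children:
                row = next_row[0]
                next_row[0] += 1
            else:
                child_rows = [place(child, depth + 1) for child in node.children]
                row = sum(child_rows) / len(child_rows)   # centre a parent over its children
            node.position = (depth * COL_WIDTH, -row * ROW_HEIGHT)
            return row

        place(root, 0)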
In some embodiments, after the first submenu is displayed, the first submenu or the multi-level menu is closed in response to identifying a fourth gesture (e.g., palm tap gesture 602 of fig. 6A) for the first submenu (e.g., first submenu 608 of fig. 6C) or the multi-level menu (e.g., multi-level menu 606 of fig. 6D). Depending on the operation habits of the operator, a single menu can be closed, or all menus can be closed at once by closing the initial multi-level menu, which improves the flexibility of operation and the operator's experience.
In some embodiments, after the first submenu is displayed, the first submenu or the multi-level menu is closed in response to identifying a palm tap gesture (e.g., palm tap gesture 602 of fig. 6A) for the first submenu or the multi-level menu. The palm tap gesture 602 conforms to the operator's operation habits and is convenient for the operator to perform.
In some embodiments, after the first submenu is displayed, the fourth gesture is determined to be the palm tap gesture in response to identifying that the palm moves from a first position (e.g., first position 612 of fig. 6B) to a second position (e.g., second position 614 of fig. 6B) in a direction approaching the first submenu or the multi-level menu (e.g., multi-level menu 604 of fig. 6B), and that the distance (e.g., distance 616 of fig. 6B) between the second position and the first submenu or the multi-level menu is less than a third distance. The condition for determining the palm tap gesture 602 is not limited to the moving distance of the palm; it may also be the moving speed of the palm or another condition chosen according to the operation habits of the operator, which is not limited in the embodiments of the present application.
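A hedged sketch of this determination (the third-distance value is an assumption): the palm must end up both closer to the menu than it started and within the threshold distance of it.

    import math

    THIRD_DISTANCE = 0.08   # assumed proximity threshold to the menu panel, in metres

    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def is_palm_tap(first_pos, second_pos, menu_pos, third_distance=THIRD_DISTANCE):
        # The palm moves towards the menu (closer at the second position than at the first)
        # and finishes near enough to the menu to count as tapping it.
        moved_towards_menu = _dist(second_pos, menu_pos) < _dist(first_pos, menu_pos)
        close_enough = _dist(second_pos, menu_pos) < third_distance
        return moved_towards_menu and close_enough

A speed-based variant would simply replace the distance checks with a palm-velocity threshold, as the preceding paragraph allows.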
In step S706, the first operation is performed in response to determining that the first operation object is an identification of the first operation.
In some embodiments, in response to determining that the first operation object is an identification of a first operation, the identification of the first operation is controlled to move following the first gesture;
and the first operation is performed in response to the identification of the first operation moving, following the first gesture, a distance greater than a second distance. Performing the first operation only after the identification of the first operation has moved farther than the second distance with the first gesture avoids the problem that the first operation cannot be performed successfully because the identification of the first operation is too close to the multi-level menu.
In some embodiments, the first operation is performed in response to identifying a second gesture for the identification of the first operation. In some embodiments, the first operation is performed in response to identifying a release gesture for the identification of the first operation. Using the second gesture, which naturally follows the first gesture, as the condition for performing the first operation conforms to the usage and operation habits of the operator and improves the operator's experience.
In some embodiments, the second gesture is determined to be the release gesture in response to identifying that the opening and closing angle between the first finger and the second finger of the second gesture is within a preset angle range. The preset angle range can be adjusted adaptively based on the operation habits of the operator, improving the operator's experience.
In some embodiments, the multi-level menu is closed after the first operation is performed. According to the operation habits of the operator, once the first operation has been performed it can be regarded as the operator's target, so the multi-level menu is closed automatically at that point, sparing the operator the need to close it manually and making operation more convenient.
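Putting step S706 together as a rough sketch (names and the second-distance value are assumptions, and the two alternative triggers of the embodiments above are collapsed into a single check for brevity):

    import math

    SECOND_DISTANCE = 0.10   # assumed drag threshold for an operation identification

    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def handle_operation(start_pos, current_pos, release_detected, run_operation, close_menu):
        # The first operation runs either once its identification has been dragged farther
        # than SECOND_DISTANCE with the first gesture, or when a release gesture is identified;
        # the multi-level menu is then closed automatically.
        dragged_far_enough = _dist(current_pos, start_pos) > SECOND_DISTANCE
        if dragged_far_enough or release_detected:
            run_operation()
            close_menu()
            return True
        return False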
It should be noted that the method of the embodiments of the present application may be performed by a single device, for example a computer or a server. The method of the embodiments may also be applied in a distributed scenario and completed by multiple devices cooperating with one another. In such a distributed scenario, one of the devices may perform only one or more steps of the method of the embodiments of the present application, and the devices interact with one another to complete the method.
It should be noted that some embodiments of the present application are described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Based on the same technical concept, the present application further provides an interaction device based on a multi-level menu, corresponding to the method of any of the above embodiments.
Referring to fig. 8, the multi-level menu based interaction device includes:
the recognition module 801 is configured to determine a type of a first operation object of the plurality of operation objects in response to recognizing a first gesture of the first operation object.
In some embodiments, the recognition module 801 is further configured to determine a type of the first operation object in response to recognizing a pinch gesture on the first operation object.
In some embodiments, the recognition module 801 is further configured to determine that the first gesture is the pinch gesture in response to recognizing that the first finger and the second finger of the first gesture are in contact.
In some embodiments, the recognition module 801 is further configured to determine the type of the first operation object in response to a distance between the first gesture and the first operation object being less than or equal to a third distance.
In some embodiments, the recognition module 801 is further configured to display the multi-level menu in response to recognizing the third gesture.
An execution module 802 configured to display the first submenu in response to determining that the first operation object is an identification of the first submenu; or
execute the first operation in response to determining that the first operation object is an identification of the first operation.
In some embodiments, the execution module 802 is further configured to display the first submenu based on displaying the multi-level menu.
In some embodiments, the execution module 802 is further configured to control, in response to determining that the first operation object is an identification of a first submenu, the identification of the first submenu to move following the first gesture;
and display the first submenu in response to the identification of the first submenu moving, following the first gesture, a distance greater than a first distance.
In some embodiments, the execution module 802 is further configured to, in response to identifying a second gesture for the identification of the first submenu, display the first submenu at a location corresponding to the second gesture.
In some embodiments, the execution module 802 is further configured to, in response to identifying a release gesture for the identification of the first submenu, display the first submenu at a location corresponding to the release gesture.
In some embodiments, the execution module 802 is further configured to control, in response to determining that the first operation object is an identification of a first operation, the identification of the first operation to move following the first gesture;
and execute the first operation in response to the identification of the first operation moving, following the first gesture, a distance greater than a second distance.
In some embodiments, the execution module 802 is further configured to execute the first operation in response to identifying a second gesture for identification of the first operation.
In some embodiments, the execution module 802 is further configured to execute the first operation in response to identifying a release gesture for the identification of the first operation.
In some embodiments, the execution module 802 is further configured to determine that the second gesture is the release gesture in response to identifying that the opening and closing angle between the first finger and the second finger of the second gesture is within a preset angle range.
In some embodiments, the execution module 802 is further configured to close the multi-level menu.
In some embodiments, the execution module 802 is further configured to determine a type of a second operation object of the plurality of operation objects in response to identifying a first gesture for the second operation object;
and in response to determining that the second operation object is the identification of a second submenu, displaying the second submenu on the basis of displaying the multi-level menu and the first submenu.
In some embodiments, the execution module 802 is further configured to determine a type of a first sub-operation object of the plurality of sub-operation objects in response to identifying a first gesture for the first sub-operation object;
and in response to determining that the first sub-operation object is the identification of a third sub-menu, displaying the third sub-menu on the basis of displaying the multi-level menu and the first sub-menu.
In some embodiments, the execution module 802 is further configured to display the multi-level menu at a location corresponding to the third gesture in response to the third gesture remaining stationary for a time greater than or equal to the first time.
In some embodiments, the execution module 802 is further configured to close the first submenu or the multi-level menu in response to identifying a fourth gesture with respect to the first submenu or the multi-level menu.
In some embodiments, the execution module 802 is further configured to close the first submenu or the multi-level menu in response to identifying a palm tap gesture for the first submenu or the multi-level menu.
In some embodiments, the execution module 802 is further configured to determine that the fourth gesture is the palm tap gesture in response to identifying that the palm moves from a first position to a second position in a direction approaching the first submenu or the multi-level menu, and that the distance between the second position and the first submenu or the multi-level menu is less than a third distance.
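For illustration only, the division of labour between the two modules could look roughly like the following Python classes; the attribute names (kind, type, show_submenu, run) are assumptions and not part of the disclosed device:

    THIRD_DISTANCE = 0.05   # assumed proximity threshold used by the recognition module

    class RecognitionModule:
        """Rough counterpart of recognition module 801: resolves a recognised gesture
        plus a nearby operation object into that object's type."""
        def resolve_type(self, gesture, operation_object):
            if gesture.kind == "pinch" and gesture.distance_to(operation_object) <= THIRD_DISTANCE:
                return operation_object.type      # e.g. "submenu" or "operation"
            return None

    class ExecutionModule:
        """Rough counterpart of execution module 802: opens the identified submenu
        or performs the identified operation."""
        def dispatch(self, operation_object):
            if operation_object.type == "submenu":
                operation_object.show_submenu()
            elif operation_object.type == "operation":
                operation_object.run()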
For convenience of description, the above device is described as being divided by function into various modules. Of course, when implementing the present application, the functions of the modules may be implemented in one or more pieces of software and/or hardware.
The device of the foregoing embodiment is configured to implement the corresponding interaction method based on the multi-level menu in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same technical concept, the application also provides an electronic device corresponding to the method of any embodiment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the interaction method based on the multi-level menu according to any embodiment when executing the program.
Fig. 9 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1020 may be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in the memory 1020 and executed by the processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown in the figure) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding interaction method based on the multi-level menu in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same technical concept, corresponding to the method of any embodiment, the application further provides a non-transitory computer readable storage medium, wherein the non-transitory computer readable storage medium stores computer instructions, and the computer instructions are used for enabling the computer to execute the interaction method based on the multi-level menu according to any embodiment.
The computer-readable media of the present embodiments include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The storage medium of the foregoing embodiment stores computer instructions for causing the computer to execute the interaction method based on the multi-level menu according to any one of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein.
Based on the same inventive concept, corresponding to the multi-level menu based interaction method of any of the above embodiments, the present application further provides a computer program product comprising computer program instructions. In some embodiments, the computer program instructions may be executed by one or more processors of a computer to cause the computer and/or the processors to perform the multi-level menu based interaction method. The processor executing a given step may belong to the execution subject corresponding to that step in the respective embodiment of the multi-level menu based interaction method.
The computer program product of the above embodiment is configured to enable the computer and/or the processor to perform the interaction method based on a multi-level menu according to any one of the above embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the present application, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, the embodiments discussed may be used with other memory architectures (e.g., dynamic RAM (DRAM)).
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements and/or the like which are within the spirit and principles of the embodiments are intended to be included within the scope of the present application.

Claims (24)

1. An interactive method based on a multi-level menu, wherein the multi-level menu comprises a plurality of operation objects, the method comprising:
in response to identifying a first gesture for a first operation object of the plurality of operation objects, determining a type of the first operation object;
in response to determining that the first operation object is an identification of a first submenu, displaying the first submenu; or
in response to determining that the first operation object is an identification of a first operation, executing the first operation.
2. The method of claim 1, wherein the first gesture comprises a pinch gesture, and wherein the determining the type of the first operation object in response to identifying the first gesture for the first operation object of the plurality of operation objects comprises:
in response to identifying a pinch gesture to the first operand, a type of the first operand is determined.
3. The method according to claim 2, wherein the identifying the pinch gesture for the first operation object specifically comprises:
in response to identifying that the first finger and the second finger of the first gesture are in contact, determining that the first gesture is the pinch gesture.
4. The method of claim 1, wherein the displaying the first submenu in response to determining that the first operational object is an identification of the first submenu comprises:
and displaying the first submenu on the basis of displaying the multi-level menu.
5. The method of claim 1, wherein the displaying the first submenu in response to determining that the first operational object is an identification of the first submenu comprises:
Controlling the identification of the first submenu to move along with the first gesture in response to determining that the first operation object is the identification of the first submenu;
and displaying the first submenu in response to the identification of the first submenu moving, following the first gesture, a distance greater than a first distance.
6. The method of claim 5, wherein the displaying the first submenu comprises:
and in response to identifying a second gesture for the identification of the first submenu, displaying the first submenu at a position corresponding to the second gesture.
7. The method of claim 6, wherein the second gesture comprises a release gesture, and wherein the displaying the first submenu at a position corresponding to the second gesture in response to identifying the second gesture for the identification of the first submenu comprises:
displaying the first submenu at a position corresponding to the release gesture in response to identifying the release gesture for the identification of the first submenu.
8. The method of claim 1, wherein the performing the first operation in response to determining that the first operation object is an identification of the first operation comprises:
in response to determining that the first operation object is an identification of a first operation, controlling the identification of the first operation to move following the first gesture;
and executing the first operation in response to the identification of the first operation moving, following the first gesture, a distance greater than a second distance.
9. The method of claim 8, wherein the performing the first operation comprises:
performing the first operation in response to identifying a second gesture for the identification of the first operation.
10. The method of claim 9, wherein the second gesture comprises a release gesture, and wherein the performing the first operation in response to identifying the second gesture for the identification of the first operation comprises:
performing the first operation in response to identifying a release gesture for the identification of the first operation.
11. The method according to claim 7 or 10, characterized in that the method further comprises:
and determining the second gesture as the loosening gesture in response to the fact that the opening and closing angles of the first finger and the second finger of the second gesture are in a preset angle range.
12. The method of claim 1, wherein, after the executing the first operation in response to determining that the first operation object is the identification of the first operation, the method further comprises:
closing the multi-level menu.
13. The method of claim 1, wherein the determining the type of the first operation object in response to identifying a first gesture to the first operation object of the plurality of operation objects comprises:
and determining the type of the first operation object in response to the distance between the first gesture and the first operation object being less than or equal to a third distance.
14. The method of claim 4, wherein after displaying the first submenu based on displaying the multi-level menu, the method further comprises:
in response to identifying a first gesture for a second operation object of the plurality of operation objects, determining a type of the second operation object;
and in response to determining that the second operation object is the identification of a second submenu, displaying the second submenu on the basis of displaying the multi-level menu and the first submenu.
15. The method of claim 4, wherein the first submenu comprises a plurality of sub-operation objects, and wherein after displaying the first submenu on the basis of displaying the multi-level menu, the method further comprises:
In response to identifying a first gesture to a first sub-operation object of the plurality of sub-operation objects, determining a type of the first sub-operation object;
and in response to determining that the first sub-operation object is the identification of a third sub-menu, displaying the third sub-menu on the basis of displaying the multi-level menu and the first sub-menu.
16. The method of claim 1, wherein, before the determining the type of the first operation object in response to identifying the first gesture for the first operation object of the plurality of operation objects, the method further comprises:
in response to identifying a third gesture, the multi-level menu is displayed.
17. The method of claim 16, wherein the displaying the multi-level menu in response to identifying a third gesture comprises:
in response to the third gesture remaining stationary for a time greater than or equal to a first time, displaying the multi-level menu at a position corresponding to the third gesture.
18. The method of claim 1, wherein, in response to determining that the first operation object is an identification of a first submenu, after displaying the first submenu, the method further comprises:
closing the first submenu or the multi-level menu in response to identifying a fourth gesture for the first submenu or the multi-level menu.
19. The method of claim 18, wherein the fourth gesture comprises a palm tap gesture, and wherein the closing the first submenu or the multi-level menu in response to identifying the fourth gesture for the first submenu or the multi-level menu comprises:
closing the first submenu or the multi-level menu in response to identifying a palm tap gesture for the first submenu or the multi-level menu.
20. The method of claim 19, wherein the identifying the palm tap gesture for the first submenu or the multi-level menu specifically comprises:
determining that the fourth gesture is the palm tap gesture in response to identifying that the palm moves from a first position to a second position in a direction approaching the first submenu or the multi-level menu, and that the distance between the second position and the first submenu or the multi-level menu is less than a third distance.
21. An interactive apparatus based on a multi-level menu, comprising:
a recognition module configured to determine a type of a first operation object of a plurality of operation objects in response to identifying a first gesture for the first operation object;
an execution module configured to display a first submenu in response to determining that the first operation object is an identification of the first submenu; or
execute a first operation in response to determining that the first operation object is an identification of the first operation.
22. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 20 when executing the program.
23. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 20.
24. A computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 20.