CN116107464A - Interaction method, interaction device, electronic equipment, readable storage medium and chip


Info

Publication number
CN116107464A
Authority
CN
China
Prior art keywords
control
information
interaction
virtual
target
Prior art date
Legal status
Pending
Application number
CN202310214792.1A
Other languages
Chinese (zh)
Inventor
刘悦琳
王强
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202310214792.1A
Publication of CN116107464A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The disclosure provides an interaction method, an interaction device, electronic equipment, a readable storage medium and a chip, and belongs to the technical field of virtual reality. The interaction method is applied to a virtual reality device and includes the following steps: when the virtual reality device displays a virtual interaction interface, acquiring first detection information, where the first detection information is used to characterize a target user's interaction intention toward controls in the virtual interaction interface; when the first detection information characterizes that the target user has an interaction intention toward a target control, displaying preset guide information corresponding to the target control in the virtual interaction interface, where the preset guide information includes multimedia content of the interaction action corresponding to the target control; and when interaction information input by the target user is received and the interaction information matches the preset guide information, triggering the target control to execute a function corresponding to the target control.

Description

Interaction method, interaction device, electronic equipment, readable storage medium and chip
Technical Field
The disclosure relates to the technical field of virtual reality, in particular to an interaction method, an interaction device, electronic equipment, a readable storage medium and a chip.
Background
With the development of technology, Virtual Reality (VR) has been applied in various fields such as gaming, design, and sports. Most VR devices in the related art use a head-mounted display together with two handheld controllers (handles); that is, control instructions are input to the VR device through the handles to complete the interaction process. However, this handle-based control mode is inconvenient to carry, and the user's operation experience during use is poor.
Disclosure of Invention
The disclosure provides an interaction method, an interaction device, an electronic device, a readable storage medium and a chip, which can improve the interaction effect of a virtual reality device.
In a first aspect, an embodiment of the present disclosure provides an interaction method applied to a virtual reality device, where the method includes:
under the condition that the virtual reality equipment displays a virtual interaction interface, acquiring first detection information, wherein the first detection information is used for representing the interaction intention of a target user on a control in the virtual interaction interface;
displaying preset guiding information corresponding to a target control in the virtual interactive interface under the condition that the first detection information characterizes that the target user has interactive intention to the target control, wherein the target control is a control in the virtual interactive interface, and the preset guiding information comprises multimedia content of interactive action corresponding to the target control;
and triggering the target control to execute a function corresponding to the target control under the condition that interaction information input by the target user is received and the interaction information matches the preset guide information.
In a second aspect, an embodiment of the present disclosure provides an interaction apparatus, which is applied to a virtual reality device, where the apparatus includes:
the acquisition module is used for acquiring first detection information under the condition that the virtual reality equipment displays a virtual interaction interface, wherein the first detection information is used for representing the interaction intention of a target user on a control in the virtual interaction interface;
the display module is used for displaying preset guide information corresponding to a target control in the virtual interaction interface under the condition that the first detection information characterizes that the target user has an interaction intention toward the target control, wherein the target control is a control in the virtual interaction interface, and the preset guide information comprises multimedia content of the interaction action corresponding to the target control;
and the execution module is used for triggering the target control to execute the function corresponding to the target control under the condition that the interaction information input by the target user is received and the interaction information is matched with the preset guide information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a program or an instruction stored on the memory and executable on the processor, where the program or the instruction implements the steps of the method described in the first aspect when executed by the processor.
In a fourth aspect, an embodiment of the present disclosure provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, where the program or the instruction implements the steps of the method described in the first aspect when executed by a processor.
In a fifth aspect, embodiments of the present disclosure provide a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute a program or instructions, the program or instructions, when executed by the processor, implementing the steps in the method described in the first aspect.
In the embodiment of the disclosure, the user can interact with the virtual interaction interface through actions. Compared with a handle-based interaction mode, no handle matched with the VR device needs to be provided, which helps improve the portability of the VR device; meanwhile, the user does not need to hold a handle during interaction, so the user's operation experience can be improved. In addition, when it is detected that the user has an interaction intention, the preset guide information is displayed to the user to prompt the user to input the corresponding interaction action, so that the user can smoothly complete the interaction process without learning the interaction action in advance, which further helps improve the interaction effect.
Drawings
FIG. 1 is one of the flow diagrams of the interaction method provided by the embodiments of the present disclosure;
FIG. 2 is a schematic diagram of a virtual interactive interface in an embodiment of the present disclosure;
FIG. 3 is a second flow chart of an interaction method according to an embodiment of the disclosure;
FIG. 4 is a schematic structural diagram of an interaction device provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, are intended to be within the scope of this disclosure.
The terms "first", "second" and the like in the description and the claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the disclosure may be practiced in sequences other than those illustrated and described herein. Moreover, the objects identified by "first", "second", etc. are generally of the same type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The following describes in detail, by means of specific embodiments and application scenarios thereof, an interaction method, an interaction device, an electronic device, a readable storage medium and a chip provided by the embodiments of the present disclosure with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flow chart of an interaction method provided in an embodiment of the disclosure, where the interaction method includes the following steps:
step 101, under the condition that the virtual reality device displays a virtual interaction interface, first detection information is obtained, wherein the first detection information is used for representing the interaction intention of a target user on a control in the virtual interaction interface.
The above-described virtual reality device may be any of various types of virtual reality devices in the related art; for example, the virtual reality device may be a head-mounted VR device. Accordingly, the target user is the user wearing the virtual reality device. The virtual interaction interface is a virtual interface that, after the user wears the virtual reality device, appears to the user as suspended in front of the eyes.
The virtual reality device may include an image acquisition device for acquiring interaction information input by the target user. The first detection information may be detection information obtained by detecting the interaction information acquired by the image acquisition device. Alternatively, the first detection information may be interaction information acquired by the image acquisition device. The image pickup device may be an image pickup device commonly known in the related art, for example, may be a camera or the like.
The first detection information may characterize whether the target user has an interaction intention for a control in the virtual interaction interface. Specifically, corresponding discrimination conditions may be preset to determine whether the target user has an interaction intention for the control in the virtual interaction interface. For example, the discrimination condition may be a specific gesture input by the target user, or may be a relative position of the target user and the virtual interactive interface, or the like.
The virtual reality device can be applied to different virtual reality scenes, for example, various game scenes, various motion scenes, various office scenes and the like. Accordingly, virtual interactive interfaces corresponding to different virtual reality scenes may be different.
At least one control may be included in the virtual interactive interface, where the control may be a control that a user controls content displayed in the virtual interactive interface, and for example, may include various types of click controls and slide controls. Accordingly, the target user can interact with the virtual interaction interface through the control in the virtual interaction interface.
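As a minimal illustrative sketch of such a discrimination condition (in Python; the control names, coordinates, and threshold below are assumptions for illustration, not values from the disclosure), the first detection information can be reduced to a fingertip position that is tested against every control:

```python
import math

# Hypothetical layout of the virtual interaction interface: each control
# has a fixed position (x, y, z), in metres, in the headset's coordinate system.
CONTROLS = {
    "mail_icon": (0.10, 0.05, 0.40),
    "slider":    (0.00, -0.12, 0.40),
}

def detect_intent(fingertip, threshold=0.08):
    """Return the control the first detection information points at:
    the nearest control whose distance to the fingertip is at most the
    preset threshold, or None when no interaction intent is detected."""
    name, dist = min(((n, math.dist(fingertip, p)) for n, p in CONTROLS.items()),
                     key=lambda nd: nd[1])
    return name if dist <= threshold else None
```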
Step 102, displaying preset guiding information corresponding to a target control in the virtual interactive interface under the condition that the first detection information characterizes that the target user has interactive intention to the target control, wherein the target control is a control in the virtual interactive interface, and the preset guiding information comprises multimedia content of interactive actions corresponding to the target control.
The preset guiding information may be various forms of guiding information, for example, visual guiding information, auditory guiding information, etc. When the preset guiding information is visual guiding information, the multimedia content may be: animation containing a specific interaction, pictures containing a specific interaction, text for describing a specific interaction, etc. When the preset guidance information is auditory guidance information, the multimedia content may be audio data for describing a specific interactive action. In addition, the preset guiding information can also contain the visual guiding information and the auditory guiding information at the same time, so that the guiding effect is further improved.
It is understood that the first detection information may characterize the target user's interaction intention toward each control in the virtual interaction interface. Specifically, the interaction intention toward the controls in the virtual interaction interface may be judged uniformly: when it is determined that the target user has an interaction intention toward the controls in the virtual interaction interface, the preset guiding information corresponding to every control in the virtual interaction interface is displayed, and at this time every control in the virtual interaction interface serves as a target control.
Alternatively, the interaction intention of the target user toward each control in the virtual interaction interface can be judged independently, that is, the interaction intention toward each control is judged separately, so that the target control is determined within the virtual interaction interface and the preset guiding information corresponding to the target control is displayed; at this time, the target control is a specific control in the virtual interaction interface. The preset guiding information may be displayed suspended over the target control, or displayed in a blank area at the side of the target control.
Step 103, triggering the target control to execute the function corresponding to the target control when the interaction information input by the target user is received and the interaction information matches the preset guiding information.
Specifically, after the preset guiding information is displayed to the target user, the target user may input an interaction action corresponding to the preset guiding information based on its guidance, and the virtual reality device may collect the interaction action through the image acquisition device to generate the interaction information. The interaction action may be any of various types of limb movements.
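A sketch of this trigger step, under the added assumption that an upstream recogniser reduces the collected interaction action to a discrete gesture label (the labels, table, and callback functions are hypothetical):

```python
# Hypothetical per-control expectations: the gesture label that matches
# the displayed guide information, and the function the control executes.
EXPECTED = {"mail_icon": "tap", "slider": "swipe_horizontal"}
ACTIONS  = {"mail_icon": lambda: print("open mailbox"),
            "slider":    lambda: print("scroll the interface")}

def on_interaction(target, gesture):
    """Trigger the target control only when the recognised gesture
    matches the preset guide information shown for that control."""
    if gesture == EXPECTED.get(target):
        ACTIONS[target]()   # execute the function corresponding to the control
        return True
    return False            # no match: keep the guide on screen
```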
In this embodiment, the user can interact with the virtual interaction interface through actions. Compared with a handle-based interaction mode, no handle matched with the VR device needs to be provided, which helps improve the portability of the VR device; meanwhile, the user does not need to hold a handle during interaction, so the user's operation experience can be improved. In addition, when it is detected that the user has an interaction intention, the preset guide information is displayed to the user to prompt the user to input the corresponding interaction action, so that the user can smoothly complete the interaction process without learning the interaction action in advance, which further helps improve the interaction effect.
Optionally, the first detection information includes a first distance value, where the first distance value is a distance value between a hand of the target user and the target control, and before the preset guiding information corresponding to the target control is displayed in the virtual interaction interface, the method further includes:
and under the condition that the first distance value is smaller than or equal to a preset threshold value, determining that the target user has interaction intention for the target control.
Specifically, the image acquisition device in the virtual reality device may directly detect the relative position information between a specific position of the target user's hand (for example, the tip of the index finger) and the image acquisition device. Since the relative position between the virtual reality device and the virtual interaction interface is fixed, and the display position of each control in the virtual interaction interface is fixed, the virtual reality device can directly obtain the relative position information between the target control and the virtual reality device. The virtual reality device may then determine the first distance value based on the relative position information between the specific position of the target user's hand and the image acquisition device, and the relative position information between the target control and the virtual reality device. In addition, the relative position information between the specific position of the target user's hand and the image acquisition device may be acquired as follows: the image acquisition device acquires an image including the target user's hand, and the acquired image is then recognized based on a preset image recognition algorithm to obtain the relative position information between the specific position of the target user's hand and the image acquisition device.
After the first distance value is obtained, it may be compared with the preset threshold value to determine whether the target user has an interaction intention toward the target control. Specifically, when the first distance value is smaller than or equal to the preset threshold value, it is determined that the target user has an interaction intention toward the target control; accordingly, when the first distance value is greater than the preset threshold value, it is determined that the target user does not have an interaction intention toward the target control.
The preset threshold may be a relatively small distance value; for example, the preset threshold may be any value from 3 cm to 13 cm, that is, when the target user's hand approaches the target control, it is determined that the target user has an interaction intention toward the target control.
It is appreciated that whether the target user has an interaction intent for each control in the virtual interactive interface may be determined based on the above-described method.
In this embodiment, whether the target user has an interaction intention toward a control in the virtual interaction interface is determined by the distance between the target user's hand and that control: when the hand approaches the control, it is determined that the target user has an interaction intention toward the control, and when the hand is far away from the control, it is determined that the target user does not. This helps improve the accuracy of judging the user's intention.
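The derivation above composes two relative positions into the first distance value; a sketch of that computation, assuming pure translation between the camera frame and the device frame (a full implementation would also apply the camera's rotation, and all values here are illustrative):

```python
import math

CAMERA_OFFSET = (0.0, 0.03, 0.0)   # hypothetical camera position in the device frame

def first_distance(hand_in_camera, control_in_device):
    """Distance between the index fingertip and a control, with both
    points expressed in the device frame before measuring."""
    hand_in_device = tuple(h + o for h, o in zip(hand_in_camera, CAMERA_OFFSET))
    return math.dist(hand_in_device, control_in_device)

PRESET_THRESHOLD = 0.08   # any value in the 3 cm to 13 cm range mentioned above

def has_intent(hand_in_camera, control_in_device):
    """True when the first distance value is at most the preset threshold."""
    return first_distance(hand_in_camera, control_in_device) <= PRESET_THRESHOLD
```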
In addition, in another embodiment of the present disclosure, a virtual control icon may also be displayed in the virtual interaction interface, where the virtual control icon corresponds to the target user's hand; motion state information of the target user's hand may be detected, and the virtual control icon may be controlled based on the motion state information to reproduce the motion of the target user's hand. The virtual control icon thus simulates the target user's hand, so that the user can drive the virtual control icon to move in the virtual interaction interface by waving the hand. In this case, the first detection information may include position information of the virtual control icon in the virtual interaction interface, and when the position information characterizes that the virtual control icon overlaps the position of a target control, it is determined that the target user has an interaction intention toward the target control.
In another embodiment of the present disclosure, the first detection information may include distance information between the virtual control icon and each control in the virtual interaction interface, and when the distance information characterizes that a distance between the virtual control icon and a target control is smaller than a preset distance value, it is determined that the target user has an interaction intention with the target control.
Optionally, after the preset guiding information corresponding to the target control is displayed in the virtual interactive interface, the method further includes:
acquiring second detection information, wherein the second detection information comprises a second distance value, and the second distance value is a distance value between the hand of the target user and the target control;
and hiding the preset guide information under the condition that the second distance value is larger than a preset distance.
Specifically, after the virtual interactive interface is displayed, the distance between the target user's hand and each control in the virtual interactive interface can be detected in real time. After it is detected that the target user has an interaction intention toward the target control, the preset guiding information corresponding to the target control can be displayed near the target control, while the states of the other controls remain unchanged. After the preset guiding information corresponding to the target control is displayed, on one hand, whether the target user inputs the interaction information can be detected; on the other hand, the second distance value can be detected to determine whether the target user's hand has moved away from the target control. When the second distance value is greater than the preset distance, it is determined that the target user's hand has moved away from the target control and that the target user no longer has an interaction intention toward the target control, so the preset guiding information can be hidden.
The generating process of the second detection information is similar to the generating process of the first detection information, and the determining process of the second distance value is similar to the determining process of the first distance value, so that repetition is avoided and no description is given here.
In this embodiment, after the preset guiding information corresponding to the target control is displayed, the second detection information is further obtained to determine whether the hand of the target user is far away from the target control, if yes, the preset guiding information is hidden, so that other information in the virtual interactive interface is prevented from being blocked due to the fact that the preset guiding information is always displayed, and further the display effect of the virtual interactive interface is improved.
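A sketch of the resulting show/hide decision; making the hide distance slightly larger than the show threshold is an added assumption, not from the disclosure, that keeps the guide from flickering when the hand hovers near the boundary:

```python
SHOW_AT = 0.08   # hand this close: show the preset guide (first detection)
HIDE_AT = 0.10   # hand this far: hide it again (second detection)

def update_guide(distance, visible):
    """Return whether the preset guide should be visible this frame."""
    if not visible and distance <= SHOW_AT:
        return True            # intent detected: display the guide
    if visible and distance > HIDE_AT:
        return False           # hand moved away: hide the guide
    return visible             # otherwise keep the current state
```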
In addition, in another embodiment of the present disclosure, the second detection information may include position information of the virtual control icon in the virtual interactive interface, and when the position information characterizes that the virtual control icon and the target control position are switched from an overlapping state to a non-overlapping state, the preset guiding information is hidden.
In another embodiment of the present disclosure, the second detection information may include distance information between the virtual control icon and each control in the virtual interactive interface, and when the distance information characterizes that a distance between the virtual control icon and the target control is greater than or equal to a preset distance value, the preset guide information is hidden.
Optionally, after hiding the preset guiding information, the method further includes:
displaying a virtual control icon in the virtual interactive interface, wherein the virtual control icon corresponds to the hand of the target user;
acquiring third detection information, wherein the third detection information comprises motion state information of the hand of the target user;
and under the condition that the third detection information characterizes that the hand of the target user is in a moving state, controlling the virtual control icon to move in the virtual interactive interface based on the third detection information, wherein the moving state of the virtual control icon is matched with the moving state of the hand of the target user.
Hiding the preset guiding information here may mean that the preset guiding information of every control in the virtual interactive interface has been hidden; at this point, it can be determined that the target user has no operation intention toward the controls displayed in the virtual interactive interface. In this case, the virtual control icon may be displayed, where the virtual control icon simulates the user's hand, i.e., the target user can control the virtual control icon to move in the virtual interactive interface by waving the hand. The virtual control icon may be any of various types of cursors in the related art; for example, referring to fig. 2, in one embodiment of the present disclosure, the virtual control icon may be an icon 201 in the form of a hand.
The third detection information may be detection information generated based on an image acquired by the image acquisition device, and specifically, image recognition may be performed on a real-time image acquired by the image acquisition device to determine whether the hand of the target user is in a moving state. In addition, when the target user wears the digital glove, the third detection information may be motion state information of the hand of the target user acquired based on the digital glove. The digital glove may be various types of digital gloves capable of detecting a change in the position of a user's hand in the related art.
It may be appreciated that the matching between the moving state of the virtual control icon and the moving state of the target user's hand may specifically mean: the moving direction of the virtual control icon is the same as that of the target user's hand, and the movement amount of the virtual control icon either equals the movement amount of the hand or follows a preset proportional relation, which can be selected according to the actual situation.
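A sketch of this matching, with the preset proportional relation exposed as a tunable gain (the default value is an arbitrary illustrative choice):

```python
def move_icon(icon_pos, hand_delta, gain=1.2):
    """Move the virtual control icon in the same direction as the hand,
    scaled by a preset proportional relation (gain=1.0 is one-to-one)."""
    return tuple(c + gain * d for c, d in zip(icon_pos, hand_delta))
```

A gain above 1.0 would let small hand movements cover a large interface, which is one way the proportional relation might be selected according to the actual situation.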
In addition, when it is determined that the target user has an interactive intention to the target control based on the above method, the virtual control icon may be located at the target control, and the virtual control icon may be in a hidden state.
In the embodiment, the hand of the target user is simulated based on the virtual control icon, so that the target user can intuitively view the currently selected position based on the virtual control icon, and the interaction effect is further improved.
Optionally, the virtual interactive interface includes a first control and a second control, where preset guiding information corresponding to the first control is different from preset guiding information corresponding to the second control.
Specifically, the first control and the second control may be two different controls in the virtual interactive interface. For example, referring to fig. 2, the virtual interactive interface includes a mail icon control 202 and a slider control 203. As shown in the middle view of fig. 2, when the target control is the mail icon control 202, the preset guiding information includes animation content of clicking the mail icon control 202, and the function corresponding to clicking the mail icon is opening the mailbox. As shown in the right view of fig. 2, when the target control is the slider control 203, the preset guiding information includes animation content of sliding the slider control 203, and the function corresponding to the slider control 203 is dragging the currently displayed interface to slide.
In addition, in an embodiment of the present disclosure, some controls in the virtual interactive interface may share the same preset guiding information content, while the content for other controls may differ. The preset guiding information of each control is displayed near the corresponding control.
In this embodiment, different preset guiding information may be set for different controls, that is, the trigger actions of different controls may differ, which helps avoid false triggering of the controls.
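One plausible way to organise such per-control guiding information is a simple registry keyed by control; the field names and media file names below are assumptions for illustration:

```python
GUIDES = {
    "mail_icon": {"gesture": "tap",
                  "media":   "tap_hint.webm",     # animation of the click action
                  "caption": "Tap to open the mailbox"},
    "slider":    {"gesture": "swipe_horizontal",
                  "media":   "swipe_hint.webm",   # animation of the slide action
                  "caption": "Swipe to scroll the interface"},
}

def guide_for(control):
    """Preset guiding information to display near the given control."""
    return GUIDES[control]
```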
Optionally, the preset guiding information includes preset gesture information, the interaction information includes target gesture information input by the target user, and before the target control is triggered, the method further includes:
and under the condition that the target gesture information is matched with the preset gesture information, determining that the interaction information is matched with the preset guiding information.
In this embodiment, corresponding preset gesture information may be set for each control in the virtual interactive interface. When the target user has an interaction intention toward a control, a preset gesture animation corresponding to that control is displayed near the control, so that the target user inputs the corresponding target gesture information according to the preset gesture animation to trigger the function corresponding to the control.
Referring to fig. 3, fig. 3 is a flowchart of an interaction method according to another embodiment of the disclosure, where the interaction method includes the following steps: the user enters VR, i.e., wears the VR device; a virtual interactive interface is presented, and a user hand image is identified and presented (i.e., the virtual control icon is presented); the distance between the user's index finger and a control in the interface is detected to obtain a first distance value; if the first distance value is greater than a preset distance, return to the previous step; if the first distance value is less than or equal to the preset distance, determine the interaction mode of the current button (i.e., the current control); the gesture corresponding to the current button is presented (i.e., the preset guiding information is displayed); whether the user makes the corresponding functional gesture is detected, and the distance between the user's index finger and the button is detected; if the distance between the index finger and the button exceeds the preset distance, return to the step of presenting the virtual interactive interface and identifying and presenting the user hand image; if it is detected that the user makes the corresponding functional gesture, the function corresponding to the button is performed.
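Putting the pieces together, a per-frame loop in the spirit of fig. 3 might look as follows; get_fingertip and recognise_gesture stand in for the headset's hand-tracking and gesture-recognition stack, and all thresholds are illustrative assumptions:

```python
import math
import time

def interaction_loop(get_fingertip, recognise_gesture, controls,
                     show_at=0.08, hide_at=0.10, frame=1 / 60):
    """controls maps a name to {"pos": (x, y, z), "gesture": label,
    "action": callable}; the loop shows a guide on approach, hides it
    on retreat, and triggers the control when the gesture matches."""
    target = None
    while True:
        tip = get_fingertip()
        if target is None:
            # step 101: measure the first distance value to every control
            name, d = min(((n, math.dist(tip, c["pos"])) for n, c in controls.items()),
                          key=lambda nd: nd[1])
            if d <= show_at:
                target = name
                print("show guide for", name)        # step 102: display guide
        elif math.dist(tip, controls[target]["pos"]) > hide_at:
            print("hide guide for", target)          # hand moved away again
            target = None
        elif recognise_gesture() == controls[target]["gesture"]:
            controls[target]["action"]()             # step 103: execute function
            target = None
        time.sleep(frame)                            # per-frame cadence
```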
In summary, according to the interaction scheme provided by the embodiments of the present disclosure, preset guiding information can be displayed during the interaction process, so the scheme can be used without learning interaction actions in advance, which greatly reduces the user's learning cost in gesture interaction. Specifically, first, the user receives clear and concise guidance during gesture interaction, and neither needs to deliberately memorize the corresponding functional gestures nor exit the current experience to study a separate gesture tutorial. Second, when using a new product, the user can also quickly learn the functional gesture information it adopts, avoiding unnecessary attempts and extra learning time. In short, the user can perform gesture interaction more easily, and the natural and convenient gesture interaction process greatly improves the user's immersion and comfort in the VR experience.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an interaction device 400 provided in an embodiment of the disclosure, where the interaction device 400 includes:
the obtaining module 401 is configured to obtain first detection information when the virtual reality device displays a virtual interaction interface, where the first detection information is used to characterize a target user's interaction intention toward controls in the virtual interaction interface;
the display module 402 is configured to display preset guiding information corresponding to a target control in the virtual interaction interface when the first detection information characterizes that the target user has an interaction intention to the target control, where the target control is a control in the virtual interaction interface, and the preset guiding information includes multimedia content of an interaction action corresponding to the target control;
and the execution module 403 is configured to trigger the target control to execute a function corresponding to the target control when the interaction information input by the target user is received and the interaction information is matched with the preset guiding information.
Optionally, the first detection information includes a first distance value, where the first distance value is a distance value between a hand of the target user and the target control, and the apparatus further includes:
and the first determining module is used for determining that the target user has an interaction intention toward the target control under the condition that the first distance value is smaller than or equal to a preset threshold value.
Optionally, the obtaining module 401 is further configured to obtain second detection information, where the second detection information includes a second distance value, and the second distance value is a distance value between a hand of the target user and the target control;
the display module 402 is further configured to conceal the preset guidance information if the second distance value is greater than a preset distance.
Optionally, the display module 402 is further configured to display a virtual control icon in the virtual interactive interface, where the virtual control icon corresponds to a hand of the target user;
the obtaining module 401 is further configured to obtain third detection information, where the third detection information includes motion state information of a hand of the target user;
the execution module 403 is further configured to control the virtual control icon to move in the virtual interactive interface based on the third detection information when the third detection information indicates that the hand of the target user is in a moving state, where the moving state of the virtual control icon matches the moving state of the hand of the target user.
Optionally, the virtual interactive interface includes a first control and a second control, where preset guiding information corresponding to the first control is different from preset guiding information corresponding to the second control.
Optionally, the preset guiding information includes preset gesture information, the interaction information includes target gesture information input by the target user, and the device further includes:
and the second determining module is used for determining that the interaction information is matched with the preset guiding information under the condition that the target gesture information is matched with the preset gesture information.
In this embodiment, the user can interact with the virtual interaction interface through actions. Compared with a handle-based interaction mode, no handle matched with the VR device needs to be provided, which helps improve the portability of the VR device; meanwhile, the user does not need to hold a handle during interaction, so the user's operation experience can be improved. In addition, when it is detected that the user has an interaction intention, the preset guide information is displayed to the user to prompt the user to input the corresponding interaction action, so that the user can smoothly complete the interaction process without learning the interaction action in advance, which further helps improve the interaction effect.
As shown in fig. 5, the embodiment of the present disclosure further provides an electronic device 500, including a processor 501, a memory 502, and a program or an instruction stored in the memory 502 and capable of running on the processor 501, where the program or the instruction implements each process of the above-mentioned interaction method embodiment when executed by the processor 501, and the process can achieve the same technical effect, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that the electronic device in the embodiments of the disclosure includes both mobile electronic devices and non-mobile electronic devices.
Fig. 6 is a schematic hardware structure of an electronic device implementing an embodiment of the disclosure.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, and processor 610.
The processor 610 is configured to obtain first detection information when the virtual reality device displays a virtual interaction interface, where the first detection information is used to characterize an interaction intention of a target user to a control in the virtual interaction interface;
The display unit 606 is configured to display preset guiding information corresponding to a target control in the virtual interaction interface when the first detection information characterizes that the target user has an interaction intention to the target control, where the target control is a control in the virtual interaction interface, and the preset guiding information includes multimedia content of an interaction action corresponding to the target control;
the processor 610 is configured to trigger the target control to execute a function corresponding to the target control when interaction information input by the target user is received and the interaction information is matched with the preset guiding information.
Optionally, the first detection information includes a first distance value, where the first distance value is a distance value between a hand of the target user and the target control, and the processor 610 is configured to determine that the target user has an interaction intention with the target control if the first distance value is less than or equal to a preset threshold.
Optionally, the processor 610 is configured to obtain second detection information, where the second detection information includes a second distance value, and the second distance value is a distance value between a hand of the target user and the target control;
The display unit 606 is configured to conceal the preset guidance information when the second distance value is greater than a preset distance.
Optionally, the display unit 606 is configured to display a virtual control icon in the virtual interactive interface, where the virtual control icon corresponds to a hand of the target user;
the processor 610 is configured to obtain third detection information, where the third detection information includes motion state information of a hand of the target user;
the processor 610 is configured to control the virtual control icon to move within the virtual interactive interface based on the third detection information when the third detection information indicates that the hand of the target user is in a moving state, where the moving state of the virtual control icon matches the moving state of the hand of the target user.
Optionally, the virtual interactive interface includes a first control and a second control, where preset guiding information corresponding to the first control is different from preset guiding information corresponding to the second control.
Optionally, the preset guiding information includes preset gesture information, the interaction information includes target gesture information input by the target user, and the processor 610 is configured to determine that the interaction information matches the preset guiding information if the target gesture information matches the preset gesture information.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 610 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
It should be appreciated that in embodiments of the present disclosure, the input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042, the graphics processor 6041 processing image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071 is also called a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. The memory 609 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 610 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the present disclosure further provides a readable storage medium, where a program or an instruction is stored, where the program or the instruction realizes each process of the above-mentioned interaction method embodiment when executed by a processor, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the disclosure further provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction, implement each process of the above-mentioned interaction method embodiment, and achieve the same technical effect, so that repetition is avoided, and no further description is given here.
It should be understood that the chips referred to in the embodiments of the present disclosure may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present disclosure is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the embodiments of the present disclosure.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the disclosure and the scope of the claims, which are all within the protection of the present disclosure.

Claims (15)

1. An interaction method, applied to a virtual reality device, the method comprising:
under the condition that the virtual reality equipment displays a virtual interaction interface, acquiring first detection information, wherein the first detection information is used for representing the interaction intention of a target user on a control in the virtual interaction interface;
displaying preset guiding information corresponding to a target control in the virtual interactive interface under the condition that the first detection information characterizes that the target user has interactive intention to the target control, wherein the target control is a control in the virtual interactive interface, and the preset guiding information comprises multimedia content of interactive action corresponding to the target control;
and triggering the target control to execute a function corresponding to the target control under the condition that the interaction information input by the target user is received and the interaction information is matched with the preset guide information.
2. The method of claim 1, wherein the first detection information includes a first distance value, the first distance value being a distance value between a hand of the target user and the target control, the method further comprising, prior to displaying preset guide information corresponding to the target control in the virtual interactive interface:
and under the condition that the first distance value is smaller than or equal to a preset threshold value, determining that the target user has an interaction intention for the target control.
3. The method of claim 1, wherein after displaying the preset guiding information corresponding to the target control in the virtual interactive interface, the method further comprises:
acquiring second detection information, wherein the second detection information comprises a second distance value, and the second distance value is a distance value between the hand of the target user and the target control;
and hiding the preset guide information under the condition that the second distance value is larger than a preset distance.
4. A method according to claim 3, wherein after said hiding said preset guidance information, the method further comprises:
displaying a virtual control icon in the virtual interactive interface, wherein the virtual control icon corresponds to the hand of the target user;
acquiring third detection information, wherein the third detection information comprises motion state information of the hand of the target user;
and under the condition that the third detection information characterizes that the hand of the target user is in a moving state, controlling the virtual control icon to move in the virtual interactive interface based on the third detection information, wherein the moving state of the virtual control icon is matched with the moving state of the hand of the target user.
5. The method of claim 1, wherein the virtual interactive interface comprises a first control and a second control, wherein preset guiding information corresponding to the first control is different from preset guiding information corresponding to the second control.
6. The method of claim 1, wherein the preset guidance information comprises preset gesture information, the interaction information comprises target gesture information entered by the target user, and before the triggering of the target control, the method further comprises:
and under the condition that the target gesture information is matched with the preset gesture information, determining that the interaction information is matched with the preset guiding information.
7. An interaction device for application to a virtual reality apparatus, the device comprising:
the acquisition module is used for acquiring first detection information under the condition that the virtual reality equipment displays a virtual interaction interface, wherein the first detection information is used for representing the interaction intention of a target user on a control in the virtual interaction interface;
the display module is used for displaying preset guide information corresponding to a target control in the virtual interaction interface under the condition that the first detection information characterizes that the target user has an interaction intention toward the target control, wherein the target control is a control in the virtual interaction interface, and the preset guide information comprises multimedia content of the interaction action corresponding to the target control;
and the execution module is used for triggering the target control to execute the function corresponding to the target control under the condition that the interaction information input by the target user is received and the interaction information is matched with the preset guide information.
8. The apparatus of claim 7, wherein the first detection information comprises a first distance value, the first distance value being a distance value between the hand of the target user and the target control, the apparatus further comprising:
and the first determining module is used for determining that the target user has the interaction intention for the target control under the condition that the first distance value is smaller than or equal to a preset threshold value.
9. The apparatus of claim 7, wherein:
the acquisition module is further configured to acquire second detection information, where the second detection information includes a second distance value, and the second distance value is a distance value between a hand of the target user and the target control;
the display module is further configured to conceal the preset guidance information when the second distance value is greater than a preset distance.
10. The apparatus of claim 9, wherein the display module is further configured to display a virtual control icon in the virtual interactive interface, the virtual control icon corresponding to a hand of the target user;
The acquisition module is further configured to acquire third detection information, where the third detection information includes motion state information of a hand of the target user;
the execution module is further configured to control the virtual control icon to move in the virtual interaction interface based on the third detection information when the third detection information characterizes that the hand of the target user is in a moving state, where the moving state of the virtual control icon is matched with the moving state of the hand of the target user.
11. The apparatus of claim 7, wherein the virtual interactive interface comprises a first control and a second control, wherein the preset guidance information corresponding to the first control is different from the preset guidance information corresponding to the second control.
12. The apparatus of claim 7, wherein the preset guidance information comprises preset gesture information, the interaction information comprises target gesture information entered by the target user, and the apparatus further comprises:
and the second determining module is used for determining that the interaction information is matched with the preset guiding information under the condition that the target gesture information is matched with the preset gesture information.
13. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor implements the steps of the method of any one of claims 1 to 6.
14. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the method of any of claims 1 to 6.
15. A chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute a program or instruction which when executed by the processor performs the steps of the method of any of claims 1 to 6.
CN202310214792.1A (filed 2023-03-07, priority 2023-03-07) Interaction method, interaction device, electronic equipment, readable storage medium and chip; status: Pending; publication: CN116107464A

Priority Applications (1)

Application Number: CN202310214792.1A; Priority Date: 2023-03-07; Filing Date: 2023-03-07; Title: Interaction method, interaction device, electronic equipment, readable storage medium and chip

Applications Claiming Priority (1)

Application Number: CN202310214792.1A; Priority Date: 2023-03-07; Filing Date: 2023-03-07; Title: Interaction method, interaction device, electronic equipment, readable storage medium and chip

Publications (1)

Publication Number: CN116107464A; Publication Date: 2023-05-12

Family

ID=86261686

Family Applications (1)

Application Number: CN202310214792.1A; Status: Pending; Publication: CN116107464A; Title: Interaction method, interaction device, electronic equipment, readable storage medium and chip

Country Status (1)

Country: CN; Publication: CN116107464A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination