WO2024037565A1 - Human-computer interaction method, display processing method based on virtual reality space, model display method, apparatus, device and medium
- Publication number: WO2024037565A1
- Application number: PCT/CN2023/113360
- Authority: WO — WIPO (PCT)
- Prior art keywords: gift, virtual, display, space, layer
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/14—Digital output to display device; cooperation and interconnection of the display device with other functional units
Description
- The present disclosure is based on, and claims priority from, the CN application No. 202210993897.7 filed on August 18, 2022, the CN application No. 202210995392.4 filed on August 18, 2022, and the CN application No. 202211124457.4 filed on September 15, 2022; the disclosure content of these CN applications is hereby incorporated into the present disclosure in its entirety.
- The present disclosure relates to the fields of virtual reality technology, communication technology and extended reality (XR) technology, and in particular to a display processing method, apparatus, device and medium based on virtual reality space; a model display method, apparatus, device and medium based on virtual reality space; and a human-computer interaction method, apparatus, device and storage medium.
- Virtual reality (VR), a form of extended reality (XR) also known as virtual environment, spiritual environment or artificial environment, refers to the technology of using computers to generate a virtual world that can directly impose visual, auditory and tactile sensations on participants and allow them to observe and operate it interactively.
- In the related art, live video can be displayed based on virtual reality technology. In the process of displaying such videos, how to make full use of the depth information in the virtual reality space to improve the user's interactive experience is a mainstream demand.
- XR technology allows users to immersively watch various virtual live broadcasts; for example, users can experience realistic live interactive scenes by wearing a head-mounted display (HMD).
- Embodiments of the present disclosure provide a display processing method based on a virtual reality space.
- The method includes the following steps: in response to a user's operation of entering the virtual reality space, presenting virtual reality video information on a virtual screen in the virtual reality space; and displaying multiple layers respectively in multiple spatial location areas in the virtual reality space, wherein each of the spatial location areas is at a different display distance from the user's current viewing position and is located in front of the virtual screen.
- Embodiments of the present disclosure also provide a display processing device based on the virtual reality space.
- The device includes a display processing module, configured to present virtual reality video information on a virtual screen in the virtual reality space in response to the user's operation of entering the virtual reality space, and to display multiple layers respectively in multiple spatial location areas in the virtual reality space, wherein each of the spatial location areas is at a different display distance from the user's current viewing position and is located in front of the virtual screen.
- An embodiment of the present disclosure also provides an electronic device.
- The electronic device includes: a processor; and a memory used to store instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute them to implement the display processing method based on the virtual reality space as provided by the embodiments of the present disclosure.
- Embodiments of the present disclosure also provide a non-volatile computer-readable storage medium storing a computer program, where the computer program is used to execute the display processing method based on virtual reality space, the model display method based on virtual reality space, or the human-computer interaction method as provided by the embodiments of the present disclosure.
- An embodiment of the present disclosure also provides a computer program, including:
- Instructions which when executed by a processor cause the processor to execute a virtual reality space-based display processing method, a virtual reality space-based model display method, or a human-computer interaction method according to an embodiment of the present disclosure.
- Embodiments of the present disclosure provide a model display method based on virtual reality space.
- The method includes: in response to a gift display instruction, generating a target gift model corresponding to the gift display instruction; determining, among the multiple layers corresponding to the currently played virtual reality video, the target display layer corresponding to the target gift model, wherein the spatial location area corresponding to each layer is at a different display distance from the user's current viewing position; and displaying the target gift model on the target spatial location area corresponding to the target display layer.
- Embodiments of the present disclosure also provide a model display device based on virtual reality space.
- The device includes: a generation module, configured to generate a target gift model corresponding to a gift display instruction in response to the gift display instruction; a determining module, configured to determine, among the multiple layers corresponding to the currently played virtual reality video, the target display layer corresponding to the target gift model, wherein the spatial location area corresponding to each layer is at a different display distance from the user's current viewing position; and a display module, configured to display the target gift model on the target spatial location area corresponding to the target display layer.
- An embodiment of the present disclosure also provides an electronic device.
- The electronic device includes: a processor; and a memory used to store instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute them to implement the model display method based on the virtual reality space as provided by the embodiments of the present disclosure.
- Embodiments of the present disclosure also provide a non-volatile computer-readable storage medium storing a computer program, where the computer program is used to execute the model display method based on virtual reality space as provided by the embodiments of the present disclosure.
- Embodiments of the present disclosure provide a human-computer interaction method applied to XR equipment.
- The method includes: in response to a triggering operation on a gift entrance in the virtual space, presenting the corresponding close gift panel in the virtual space; and in response to a giving operation of a hand model in the virtual space on any virtual gift in the close gift panel, presenting a special effect of the virtual gift in the virtual space.
- Embodiments of the present disclosure provide a human-computer interaction device configured in XR equipment.
- the device includes:
- a gift panel presentation module, configured to present the corresponding close gift panel in the virtual space in response to a triggering operation on the gift entrance in the virtual space;
- a gift giving module, configured to present a special effect of a virtual gift in the virtual space in response to a giving operation of a hand model in the virtual space on any virtual gift in the close gift panel.
- Embodiments of the present disclosure provide an electronic device, which includes:
- a processor and a memory, where the memory is used to store a computer program;
- the processor is used to call and run the computer program stored in the memory to execute the human-computer interaction method provided in the first aspect of the present disclosure.
- Embodiments of the present disclosure provide a non-volatile computer-readable storage medium for storing a computer program, the computer program causing a computer to execute the human-computer interaction method as provided in the first aspect of the present disclosure.
- Embodiments of the present disclosure provide a computer program product, including a computer program/instructions, which causes a computer to execute the human-computer interaction method as provided in the first aspect of the present disclosure.
- Figure 1 is a schematic diagram of an application scenario of a virtual reality device provided by an embodiment of the present disclosure
- Figure 2 is a schematic flowchart of a display processing method based on virtual reality space provided by an embodiment of the present disclosure
- Figure 3 is a schematic diagram of a display scene based on virtual reality space provided by an embodiment of the present disclosure
- Figure 4 is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 5 is a schematic flowchart of another display processing method based on virtual reality space provided by an embodiment of the present disclosure
- Figure 6 is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 7 is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 8 is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 9 is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 10A is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 10B is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 11 is a schematic diagram of another display scene based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 12 is a schematic structural diagram of a display processing device based on virtual reality space provided by an embodiment of the present disclosure
- FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- Figure 14 is a schematic flowchart of a model display method based on virtual reality space provided by an embodiment of the present disclosure
- Figure 15 is a schematic diagram of a model display scene based on virtual reality space provided by an embodiment of the present disclosure
- Figure 16 is a schematic flowchart of another model display method based on virtual reality space provided by an embodiment of the present disclosure.
- Figure 17 is a schematic diagram of a hierarchical structure between multiple layers provided by an embodiment of the present disclosure.
- Figure 18 is a schematic structural diagram of a model display device based on virtual reality space provided by an embodiment of the present disclosure
- Figure 19 is a flow chart of a human-computer interaction method provided by an embodiment of the present disclosure.
- Figure 20 is a schematic diagram of presenting a close gift panel in a virtual space provided by an embodiment of the present disclosure
- Figure 21(A), Figure 21(B) and Figure 21(C) are respectively different exemplary schematic diagrams of turning pages of a close gift panel in a virtual space provided by embodiments of the present disclosure
- Figure 22 is a schematic diagram of presenting a close gift panel in the virtual space by triggering the remote gift entrance provided by an embodiment of the present disclosure
- Figure 23 is a schematic diagram of presenting a close gift panel in the virtual space by triggering the close gift entrance provided by an embodiment of the present disclosure
- Figure 24 is a flow chart of a method for presenting a close gift panel in a virtual space according to an embodiment of the present disclosure
- Figure 25 is a schematic diagram of presenting gift giving guidance information under an experience model provided by an embodiment of the present disclosure.
- Figure 26 is a flow chart of a method for giving any virtual gift in a virtual space through a hand model according to an embodiment of the present disclosure
- Figure 27 is a schematic diagram of a human-computer interaction device provided by an embodiment of the present disclosure.
- Figure 28 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
- The term "include" and its variations are open-ended, i.e., "including but not limited to".
- the term “based on” means “based at least in part on.”
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
- The present disclosure provides a display processing method, device, equipment and medium based on virtual reality space, which realizes the structured display of video-related layers between the virtual screen and the user's current viewing position. Through the hierarchical setting of the distance between each layer and the user's current viewing position, the depth information in the virtual reality space is fully utilized, achieving a three-dimensional display effect and helping to improve the user's viewing experience.
- Virtual reality device: a terminal that realizes virtual reality effects; it can usually be provided in the form of glasses, a helmet-mounted display (HMD), or contact lenses to achieve visual perception and other forms of perception. The form in which the virtual reality device is realized is not limited to this, and it can be further miniaturized or enlarged as needed.
- Computer-side virtual reality (PCVR) device: an external computer-side virtual reality device that uses data output from the PC side to achieve virtual reality effects, with the PC performing the calculations related to the virtual reality functions.
- Mobile virtual reality device: supports setting up a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display with a special card slot); through a wired or wireless connection with the mobile terminal, the mobile terminal performs the calculations related to the virtual reality functions and outputs data to the mobile virtual reality device, for example, viewing virtual reality video information through an APP on the mobile terminal.
- All-in-one virtual reality device: has a processor for performing the calculations related to virtual reality functions, so it has independent virtual reality input and output functions; it does not need to be connected to a PC or a mobile terminal, and has a high degree of freedom in use.
- Virtual reality objects: objects that interact in a virtual scene and are controlled by users or robot programs (for example, robot programs based on artificial intelligence); they can be still, move, and perform various behaviors in the virtual scene, such as the virtual person corresponding to a user in a live broadcast scenario.
- HMDs are relatively lightweight, ergonomically comfortable, and provide high-resolution content with low latency.
- In some embodiments, the virtual reality device is equipped with a posture detection sensor (such as a nine-axis sensor) to detect posture changes of the virtual reality device in real time. If the user wears the virtual reality device, then when the user's head posture changes, the real-time posture of the head is transmitted to the processor to calculate the gaze point of the user's line of sight in the virtual environment. Based on the gaze point, the image within the user's gaze range (i.e., the virtual field of view) in the three-dimensional model of the virtual environment is calculated and displayed on the display, giving an immersive experience as if watching in a real environment.
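As a rough illustration of this gaze-point pipeline, the following Python sketch (not code from the patent; the quaternion convention, the fixed field-of-view angle and all names are assumptions) turns an HMD orientation into a gaze vector and tests whether a scene point falls within the user's virtual field of view:

```python
import math

def quat_rotate(q, v):
    """Rotate vector v = (x, y, z) by unit quaternion q = (w, x, y, z)."""
    w, qx, qy, qz = q
    # t = 2 * (q_vec x v); v' = v + w*t + (q_vec x t)
    tx = 2.0 * (qy * v[2] - qz * v[1])
    ty = 2.0 * (qz * v[0] - qx * v[2])
    tz = 2.0 * (qx * v[1] - qy * v[0])
    return (v[0] + w * tx + (qy * tz - qz * ty),
            v[1] + w * ty + (qz * tx - qx * tz),
            v[2] + w * tz + (qx * ty - qy * tx))

def gaze_direction(orientation):
    """Forward (gaze) vector of the HMD; -Z is taken as 'forward' here."""
    return quat_rotate(orientation, (0.0, 0.0, -1.0))

def in_virtual_field_of_view(eye, orientation, point, fov_deg=100.0):
    """True if `point` lies within the angular field of view around the gaze."""
    fwd = gaze_direction(orientation)
    d = tuple(p - e for p, e in zip(point, eye))
    norm = math.sqrt(sum(c * c for c in d))
    if norm == 0.0:
        return True
    cos_angle = sum(f * c for f, c in zip(fwd, d)) / norm
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

# Identity orientation: looking straight down -Z; a point ahead is visible.
print(in_virtual_field_of_view((0, 0, 0), (1, 0, 0, 0), (0, 0, -5)))  # True
```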
- In some embodiments, when the user wears the HMD device and opens a predetermined application, such as a live video application, the HMD device runs a corresponding virtual scene. The virtual scene can be a simulation environment of the real world, a half-simulated, half-fictional virtual scene, or a purely fictitious virtual scene.
- the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene.
- the embodiments of the present disclosure do not limit the dimensions of the virtual scene.
- the virtual scene can include people, sky, land, ocean, etc.
- the land can include environmental elements such as deserts and cities.
- Users can control virtual objects to move in the virtual scene, and can also use handle devices, bare-hand gestures and other methods to interactively control the controls, models, display content, characters, etc. in the virtual scene.
- embodiments of the present disclosure provide a display processing method based on the virtual reality space. This method is introduced below with reference to the embodiment.
- Figure 2 is a schematic flowchart of a display processing method based on virtual reality space provided by an embodiment of the present disclosure.
- the method can be executed by a display processing device based on virtual reality space, where the device can be implemented using software and/or hardware.
- the method includes the following steps:
- Step S201: In response to the user's operation of entering the virtual reality space, virtual reality video information is presented on the virtual screen in the virtual reality space; and multiple layers are displayed in multiple spatial location areas in the virtual reality space.
- Each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen.
- In summary, the display processing solution based on virtual reality space presents virtual reality video information on the virtual screen in the virtual reality space in response to the user's operation of entering the virtual reality space, and displays multiple layers respectively in multiple spatial location areas in the virtual reality space, wherein each spatial location area is at a different display distance from the user's current viewing position and is located in front of the virtual screen. In this way, the structured display of video-related layers between the virtual screen and the user's current viewing position is realized, the depth information in the virtual reality space is fully utilized, and a three-dimensional display effect is achieved, which helps improve the user's viewing experience.
- In the related art, various information is displayed on the video interface; for example, barrage information, gift information, etc. may be displayed. In the embodiments of the present disclosure, the depth information in the virtual reality space is fully utilized, and different information is split and displayed in different layers according to scene requirements to achieve a hierarchical display effect.
- In the embodiments of the present disclosure, the user's operation of entering the virtual reality space is obtained, where the operation may be, for example, a detected switch operation of the user turning on the virtual reality device. Further, in response to the user's operation of entering the virtual reality space, virtual reality video information is presented on a virtual screen in the virtual reality space, where the virtual screen corresponds to a canvas pre-built in the virtual scene to display the relevant video; for example, a live video or an online concert video is presented on the virtual screen.
- Further, multiple layers are displayed in multiple spatial location areas in the virtual reality space, where the spatial location area corresponding to each layer is at a different display distance from the user's current viewing position and is located in front of the virtual screen. That is, multiple layers are displayed staggered between the user's current viewing position and the virtual screen to present different layer information, visually creating a three-dimensional sense of layered information display.
- Using this hierarchical structure of layers, some layer information that requires high user attention in the scene can be displayed in a spatial location area at a closer display distance, while other layer information that does not require high user attention can be displayed in a spatial location area at a farther display distance. In this way, the depth information in the virtual reality space is fully utilized to achieve "enhanced display" and "weakened display" of layer messages in the display dimension, improving the user's viewing experience.
- It should be noted that the coordinate values of the spatial location areas corresponding to the above-mentioned multiple layers differ on the depth axis, achieving a visually "far and near" staggered display for the user. On this basis, the horizontal coordinate values of the spatial location areas may not be exactly the same (that is, the front and rear layers contain layer parts that do not overlap in the X-axis direction), so that multiple layers are "staggered" in the horizontal direction; or the vertical coordinate values of the spatial location areas may not be exactly the same (that is, the front and rear layers contain layer parts that do not overlap in the Y-axis direction), so that multiple layers are "staggered" in the vertical direction; or multiple layers may be "staggered" in the horizontal and vertical directions at the same time.
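A minimal Python sketch of this staggered arrangement (the layer names, depths and step sizes are illustrative assumptions, not values from the disclosure): each layer gets its own depth between the viewer and the virtual screen, plus alternating X/Y offsets so that front and rear layers do not fully overlap.

```python
from dataclasses import dataclass

@dataclass
class LayerArea:
    name: str
    x: float  # horizontal offset (staggered in the X-axis direction)
    y: float  # vertical offset (staggered in the Y-axis direction)
    z: float  # depth: display distance from the viewer's position

def stagger_layers(names, viewer_z=0.0, screen_z=10.0, x_step=0.4, y_step=0.25):
    """Place layers at evenly spaced depths with alternating X/Y offsets."""
    gap = (screen_z - viewer_z) / (len(names) + 1)
    return [LayerArea(name=name,
                      x=x_step * (1 if i % 2 == 0 else -1),
                      y=y_step * (1 if i % 4 < 2 else -1),
                      z=viewer_z + gap * (i + 1))  # lower index sits closer
            for i, name in enumerate(names)]

for area in stagger_layers(["UI", "info flow", "gifts", "expressions"]):
    print(area)
```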
- For example, as shown in Figure 3, where the application scenario is the playback of a live video stream, the spatial location areas of the corresponding layers are displayed in front of the currently played virtual video frame, and different layers are at different display distances from the current viewing position. A layer at a closer display distance is obviously closer to the user's eyes, and the user is more likely to notice it; for example, setting the interface-type layer closest to the user's eyes obviously makes the user's operations more convenient. In order to avoid front-and-rear occlusion between layers, the horizontal coordinate values of the spatial location areas corresponding to the multiple layers are not exactly the same, and neither are their vertical coordinate values. Therefore, as shown in Figure 4, the viewing user can see the layer messages on the multiple layers in front of the virtual screen.
- It should be noted that this hierarchical structure between layers only produces different attention sensitivities to different layers for the viewing user; the viewing user cannot intuitively see the hierarchical relationship between the target display positions of the layers, and can only feel that the content of different layers is nearer or farther. Since users tend to focus on information closer to themselves, the attention paid to information on the layer closest to the user is improved.
- In some embodiments, as shown in Figure 5, displaying multiple layers in multiple spatial location areas in the virtual reality space includes the following steps:
- Step 501: Obtain the layer types of the multiple layers corresponding to the virtual reality video information.
- virtual reality video information is a video stream displayed in a virtual scene in a virtual reality space, including but not limited to live video streams, concert video streams, etc.
- the layer types of multiple layers corresponding to the virtual reality video information are obtained.
- The layer types can be calibrated according to the scene, including but not limited to an operation user interface layer, an information flow display layer, a gift display layer, an expression display layer, etc., wherein each layer type may include at least one sub-layer; for example, the operation user interface layer may include a control panel layer, etc., which are not listed one by one here.
- Step 502: Determine multiple spatial location areas of the multiple layers in the virtual reality space according to the layer types.
- In the embodiments of the present disclosure, the multiple spatial location areas of the multiple layers in the virtual reality space are determined according to the layer types. Each spatial location area is at a different display distance from the user's current viewing position, so visually different layer types are at different distances from the user. Usually users notice the content on the layer closer to them first; therefore, according to the needs of the scene, the hierarchical structure between the above layers can be used to display different information hierarchically and improve the user's viewing experience.
- In some embodiments, the display vector of a layer can be adjusted according to the change information of the user's sight direction, so that the relevant information on the layer is always displayed facing the user, where the user's sight change information includes the change direction and change angle of the user's perspective, etc.; a small sketch of this adjustment follows.
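The sketch below is illustrative only: a yaw-only "billboard" rotation is one simple way to keep a layer facing the viewer, and all names are assumptions rather than the disclosure's implementation.

```python
import math

def face_viewer(layer_pos, viewer_pos):
    """Yaw angle (radians) that turns the layer's normal toward the viewer."""
    dx = viewer_pos[0] - layer_pos[0]
    dz = viewer_pos[2] - layer_pos[2]
    return math.atan2(dx, dz)

# Re-run whenever the user's sight direction or position changes.
yaw = face_viewer(layer_pos=(1.0, 0.0, 4.0), viewer_pos=(0.0, 1.6, 0.0))
print(f"layer yaw: {math.degrees(yaw):.1f} degrees")
```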
- Step 503: Display the layer messages in the corresponding layers according to the spatial location areas.
- During the actual display process, as shown in Figure 7, the layer messages on different layers are displayed according to the spatial location areas; that is, it suffices to display the layer messages on the spatial location area of the corresponding layer, and there is no need to render the layers themselves (the figure only shows the display of layer messages of two layers). This reduces occlusion between layers and improves the viewing experience.
- In some embodiments, the display positions of layer messages of different layers can be "staggered", or the spatial location areas of different layers can be "staggered" as much as possible, so as to reduce the overlap between the front and rear layers (see the sketch below).
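The following minimal Python sketch walks through steps 501 to 503 under assumed names (the layer types, coordinates and print-based display are illustrative, not the disclosure's implementation): each layer type maps to the anchor of its spatial location area, and a layer message is placed at that area without the layer itself ever being rendered.

```python
LAYER_AREAS = {                     # layer type -> (x, y, z) anchor of its area
    "operation_ui": (0.0, -0.5, 1.5),   # closest to the viewer
    "info_flow":    (-0.8, 0.2, 2.5),
    "gift":         (0.6, 0.4, 3.5),
    "expression":   (0.0, 0.8, 4.5),    # closest to the virtual screen
}

def display_message(message_text, layer_type):
    """Place a message in its layer's spatial location area (no layer mesh)."""
    x, y, z = LAYER_AREAS[layer_type]
    # A real engine would spawn a text/gift node at (x, y, z) here.
    print(f"show {message_text!r} at ({x}, {y}, {z}) [{layer_type}]")

display_message("user A sent a gift", "gift")
display_message("barrage: hello!", "info_flow")
```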
- In some embodiments, in response to obtaining a first layer message of a preset first type, the first layer corresponding to the first layer message is determined. The first type of message may be considered a message that can be displayed in real time, such as a "gift message" sent to the user. In some embodiments, the first display level of the first layer message can be determined according to an evaluation index set for the scene, the second display level of each layer that can display the preset first type of message can be obtained, and the layer whose second display level matches the first display level is determined as the first layer. In other embodiments, other methods can also be used to determine the first layer, which are not listed here.
- Further, the first layer message is displayed in the spatial location area corresponding to the first layer, thereby fully utilizing the depth information in the virtual reality space to determine the display positions of different layer messages. The display position of the first layer message within the spatial location area corresponding to the first layer may be random, or may be set according to relevant scene needs.
- In some embodiments, a second layer message of a preset second type may be a layer message that cannot be displayed in real time. For example, in a live broadcast scenario, multiple users may send gift messages at the same time; in order to enhance the atmosphere of the live broadcast room and allow users to intuitively see the messages they send, gift messages are usually displayed in the form of a message queue, and such a gift message can be regarded as a second layer message of the second type. In this case, the second layer message is added to the layer message queue of the second layer corresponding to the second layer message, and the layer messages in the queue are displayed in the spatial location area of the second layer in their order in the message queue.
- In some embodiments, the layer messages in the message queue can be further divided into layer message subtypes, and the display sub-area corresponding to each layer message subtype is determined within the spatial location area of the second layer. When displaying a layer message in the layer message queue of the second layer, the message is displayed in the display sub-area corresponding to its subtype within the spatial location area of the second layer. That is, layer messages under the same layer type can share one spatial location area, avoiding a complex hierarchical structure that would affect the user's viewing experience. For example, if the spatial location area of the second layer is X, the gift messages can be divided into a half-screen gift message subtype and a full-screen gift message subtype, where the display sub-area corresponding to the half-screen gift message subtype is the left half of X, and the display sub-area corresponding to the full-screen gift message subtype is the entirety of X; thus, the display position of a half-screen gift message a is determined on the left half of X, and the display position of a full-screen gift message b is determined on the whole of X.
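A minimal sketch of this second-type message queue, with assumed names (the sub-area labels simply echo the example above): messages are displayed in arrival order, each in the display sub-area of its subtype within the layer's spatial location area X.

```python
from collections import deque

SUB_AREAS = {
    "half_screen_gift": "left half of X",
    "full_screen_gift": "entirety of X",
}

class SecondLayerQueue:
    def __init__(self):
        self._queue = deque()

    def add(self, message, subtype):
        self._queue.append((message, subtype))  # keep arrival order

    def display_next(self):
        """Pop the oldest message and show it in its subtype's sub-area."""
        if self._queue:
            message, subtype = self._queue.popleft()
            print(f"display {message!r} in the {SUB_AREAS[subtype]}")

q = SecondLayerQueue()
q.add("gift message a", "half_screen_gift")
q.add("gift message b", "full_screen_gift")
q.display_next()  # gift message a -> left half of X
q.display_next()  # gift message b -> entirety of X
```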
- The display processing method based on virtual reality space of the embodiments of the present disclosure presents virtual reality video information on the virtual screen in the virtual reality space in response to the user's operation of entering the virtual reality space, and displays multiple layers respectively in multiple spatial location areas in the virtual reality space, wherein each spatial location area is at a different display distance from the user's current viewing position and is located in front of the virtual screen. In this way, the structured display of video-related layers between the virtual screen and the user's current viewing position is realized, the depth information in the virtual reality space is fully utilized, and a three-dimensional display effect is achieved, which helps improve the user's viewing experience.
- In some embodiments, the layer priority of each layer can also be determined based on the needs of the scene, and the spatial location areas of different layers can be determined based on the priority; according to indicators such as the importance of the messages, the layer messages are divided into different layers for display based on the degree of user attention they require, further improving the user's viewing experience.
- In some embodiments, determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the layer types includes: determining the priority information of the multiple layers according to the multiple layer types, and further determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the priority information. The higher the priority, the more the layer messages displayed in the corresponding layer need to attract the user's attention, and therefore the closer the corresponding target space area is to the user's viewing position.
- The priority information corresponding to a layer type can be calibrated according to the needs of the scene, and the same layer type may have different priority information in different scenes. The spatial location area of a layer type with a higher priority is closer to the viewing user; thus, since the control layer is closest to the current viewing user, it is convenient for the user to perform interactive operations, and the layer carrying messages sent by the viewing user himself is also closer to the current viewing user, so it is easier for that user to notice the relevant information he sends first.
- In some embodiments, the spatial location area of each layer is set relatively fixedly, and the spatial location areas corresponding to different priorities are stored in a preset database; the preset database is then queried according to the priority information to obtain the multiple spatial location areas corresponding to the multiple layers.
- In some embodiments, the video display position of the virtual reality video information in the virtual reality space is determined first, where the video display position can be based on the position of the canvas pre-built for displaying the video, that is, the position of the virtual screen. Then, starting from the video display position and moving in the direction closer to the user's current viewing position, the spatial location area of each layer is determined one by one according to a preset distance interval and the priority information in order from low to high, thereby determining the multiple spatial location areas of the multiple layers. The preset distance interval can be calibrated based on experimental data.
- For example, as shown in Figure 8, if the preset distance interval is J1 and the multiple layers in order from high to low priority are L1, L2, L3 and L4, then, with the video display position as the starting point and moving in the direction close to the user's current viewing position, the spatial location area of L4 is determined to be at P1, which is J1 in front of the video display position; the spatial location area of L3 is determined to be at P2, which is J1 in front of P1; the spatial location area of L2 is determined to be at P3, which is J1 in front of P2; and the spatial location area of L1 is determined to be at P4, which is J1 in front of P3.
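A minimal sketch of this screen-anchored placement (the 10.0 screen depth and the interval value are assumed, not from the disclosure): layers are placed from the lowest priority, nearest the virtual screen, to the highest priority, nearest the viewer.

```python
def place_from_screen(layers_high_to_low, screen_depth, j1):
    """Return {layer: distance-from-viewer}; smaller means closer to the user."""
    positions, depth = {}, screen_depth
    for layer in reversed(layers_high_to_low):  # lowest priority placed first
        depth -= j1                             # step J1 toward the viewer
        positions[layer] = depth
    return positions

# L1 has the highest priority, L4 the lowest; the virtual screen is 10.0 away.
print(place_from_screen(["L1", "L2", "L3", "L4"], screen_depth=10.0, j1=1.0))
# {'L4': 9.0, 'L3': 8.0, 'L2': 7.0, 'L1': 6.0}  (i.e., P1..P4 in the example)
```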
- In some embodiments, the target layer corresponding to the highest priority is determined according to the priority information, and the spatial location area of the target layer in the virtual reality space is determined; the way this spatial location area is determined differs across application scenarios. In some possible implementations, the position at a preset distance threshold from the user's current viewing position in the direction of the user's line of sight can be determined as the spatial location area. In other possible implementations, the total distance between the user's current viewing position and the video display position of the virtual reality video information in the virtual reality space can be determined, and, with the video display position as the starting point, the position at a preset proportion threshold of the total distance is determined as the spatial location area; this prevents the displayed layer from being too close to the viewing user, which would limit the information within the user's viewing angle.
- After determining the spatial location area of the target layer, with that spatial location area as the starting point, the spatial location area of each other layer is determined one by one in the direction away from the user's current viewing position, according to a preset distance interval and the priority information in order from high to low, thereby determining the multiple spatial location areas of the multiple layers. The preset distance interval here may be the same as the preset distance interval in the above embodiment, or may be different, and can be set according to scene requirements.
- For example, as shown in Figure 9, if the preset distance interval is J2, the multiple layers in descending order of priority are L1, L2, L3 and L4, and the target layer is L1, then, with the user's current viewing position D as a reference, the position Z1 at a preset distance J3 from D in the direction away from the user's current viewing position is determined as the spatial location area of L1; the spatial location area of L2 is determined to be at Z2, which is J2 behind Z1 in the direction away from the user's current viewing position; the spatial location area of L3 is determined to be at Z3, which is J2 behind Z2; and the spatial location area of L4 is determined to be J2 behind Z3.
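The viewer-anchored variant can be sketched the same way (the J3 and J2 values are assumed): the highest-priority target layer is placed first at distance J3 from the viewing position D, and each remaining layer follows at interval J2, moving away from the viewer.

```python
def place_from_viewer(layers_high_to_low, j3, j2):
    """Return {layer: distance-from-viewer}; highest priority sits closest."""
    positions, distance = {}, j3        # target layer L1 sits at Z1 = D + J3
    for layer in layers_high_to_low:
        positions[layer] = distance
        distance += j2                  # next layer is J2 farther from D
    return positions

print(place_from_viewer(["L1", "L2", "L3", "L4"], j3=2.0, j2=1.0))
# {'L1': 2.0, 'L2': 3.0, 'L3': 4.0, 'L4': 5.0}  (i.e., Z1, Z2, Z3, ...)
```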
- It should be noted that the location range corresponding to a spatial location area mentioned in the above embodiments can be determined according to the shape of the spatial location area, etc. The spatial location area corresponding to one layer can be continuous, or, as shown in Figure 9 above, divided into multiple modules.
- In some embodiments, as shown in Figure 11, each layer is an arc-shaped area (the non-shaded area in the figure), and the arc-shaped area is located within the viewing angle range of the viewing user; the spatial location area corresponding to each layer is located on the corresponding arc-shaped area (two layers are shown in the figure). The center of the arc-shaped area corresponding to each layer lies in the direction of the user's line of sight from the user's current viewing position, so that the viewing user has a stronger three-dimensional viewing experience.
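As a rough geometric sketch of such arc-shaped areas (the radii, spans and point counts are illustrative assumptions), positions on each layer's arc can be sampled so that the arc is centered on the user's line of sight, with each layer using its own radius, i.e., its display distance:

```python
import math

def arc_positions(viewer, gaze_yaw, radius, count, span_deg=90.0):
    """Sample `count` points on an arc of `span_deg` centered on the gaze."""
    points = []
    for i in range(count):
        # Spread the points symmetrically around the gaze direction.
        frac = (i / (count - 1) - 0.5) if count > 1 else 0.0
        yaw = gaze_yaw + math.radians(span_deg) * frac
        points.append((viewer[0] + radius * math.sin(yaw),
                       viewer[1],
                       viewer[2] + radius * math.cos(yaw)))
    return points

# Two layers at different display distances, both wrapping around the viewer.
near_layer = arc_positions(viewer=(0, 1.6, 0), gaze_yaw=0.0, radius=2.0, count=3)
far_layer = arc_positions(viewer=(0, 1.6, 0), gaze_yaw=0.0, radius=4.0, count=5)
print(near_layer)
```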
- To sum up, the display processing method based on virtual reality space of the embodiments of the present disclosure determines the spatial location area of each layer based on the layer's priority information, so that the spatial location areas of different layers are at different display distances from the user's current viewing position, realizing a hierarchical display of layer information.
- the present disclosure also proposes a display processing device based on virtual reality space.
- Figure 12 is a schematic structural diagram of a display processing device based on virtual reality space provided by an embodiment of the present disclosure.
- the device can be implemented by software and/or hardware, and generally can be integrated in an electronic device to perform display processing based on virtual reality space.
- the device includes: a display processing module 1210, wherein,
- the display processing module 1210 is configured to present virtual reality video information on a virtual screen in the virtual reality space in response to the user's operation of entering the virtual reality space; and to display multiple images in multiple spatial location areas in the virtual reality space. layer;
- Each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen.
- The virtual reality space-based display processing device provided by the embodiments of the present disclosure can execute the virtual reality space-based display processing method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects of the executed method; its implementation principles are similar and will not be repeated here.
- The present disclosure also proposes a computer program product, which includes a computer program/instructions; when the computer program/instructions are executed by a processor, the display processing method based on the virtual reality space in the above embodiments is implemented.
- FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- The electronic device 1300 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players) and vehicle-mounted terminals (e.g., car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 13 is only an example and should not bring any limitations to the functions and scope of use of the embodiments of the present disclosure.
- The electronic device 1300 may include a processor (e.g., a central processing unit, a graphics processor, etc.) 1301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a memory 1308 into a random access memory (RAM) 1303. In the RAM 1303, various programs and data required for the operation of the electronic device 1300 are also stored.
- the processor 1301, ROM 1302, and RAM 1303 are connected to each other through a bus 1304.
- Input/output (I/O) interface 1305 is also connected to the bus 1304.
- Generally, the following devices may be connected to the I/O interface 1305: an input device 1306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a memory 1308 including a magnetic tape, a hard disk, etc.; and a communication device 1309.
- the communication device 1309 may allow the electronic device 1300 to communicate wirelessly or wiredly with other devices to exchange data.
- While FIG. 13 illustrates the electronic device 1300 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via communication device 1309, or from memory 1308, or from ROM 1302.
- When the computer program is executed by the processor 1301, the above-mentioned functions defined in the virtual reality space-based display processing method of the embodiments of the present disclosure are performed.
- The present disclosure provides a model display method, device, equipment and medium based on virtual reality space, which makes full use of the depth information of the virtual reality space to realize a three-dimensional display effect for gift models, enhancing the user's interactive experience on the basis of realizing gift-sending interaction in the virtual reality space.
- Figure 14 is a schematic flowchart of a model display method based on virtual reality space provided by an embodiment of the present disclosure.
- the method can be executed by a model display device based on virtual reality space.
- the device can be implemented using software and/or hardware.
- the method includes the following steps:
- Step 201: In response to the gift display instruction, generate a target gift model corresponding to the gift display instruction.
- The target gift model is a model that can be displayed in the virtual reality space, and includes but is not limited to one or a combination of text, pictures, animations, etc.; the target gift model can be in 2D or 3D form, and the possible forms are not listed one by one here.
- In some embodiments, the user can trigger the input of gift display instructions through preset buttons on a control device (such as a handle device). There can also be many other ways for the user to input gift display instructions; compared with using physical device buttons, the following method proposes a VR control scheme that requires no physical device buttons, alleviating the technical problem that physical buttons are easily damaged, which in turn affects the user's control.
- In some embodiments, the image information captured by a camera aimed at the user can be monitored, and based on the user's hand or the user's handheld device (such as a handle) in the image information, it is determined whether the preset conditions for displaying interactive component models are met, where an interactive component model is a component model used for interaction, each pre-bound with an interactive function event. If it is determined that the preset conditions for displaying the interactive component models are met, at least one interactive component model is displayed in the virtual reality space; finally, by identifying the action information of the user's hand or the user's handheld device, the interactive function event pre-bound to the interactive component model selected by the user is executed.
- For example, a camera can be used to capture an image of the user's hand or the user's handheld device, and based on image recognition technology, the user's hand gesture or the position change of the handheld device in the image is judged. If it is determined that the user's hand or handheld device has been raised by a certain amplitude, such that the user's virtual hand or virtual handheld device mapped into the virtual reality space enters the user's current perspective range, the interactive component models can be evoked in the virtual reality space. For instance, based on image recognition technology, the user lifts the handheld device to call up interactive component models in the form of floating balls, where each floating ball represents a control function and the user can interact based on the floating ball functions; floating balls 1, 2, 3, 4 and 5 can correspond to interactive component models such as "gift 1 display", "gift 2 display", "gift 3 display", "more gifts" and "cancel".
- In some embodiments, the position of the user's hand or the user's handheld device is identified and mapped into the virtual reality space to determine the spatial position of a corresponding click mark. If the spatial position of the click mark matches the spatial position of a target interactive component model among the displayed interactive component models, the target interactive component model is determined to be the interactive component model selected by the user; finally, the interaction function event pre-bound to the target interactive component model is executed.
- For example, the user can raise the left-hand handle to evoke the interactive component models displayed in the form of floating balls, and then select and click an interactive component model by moving the right-hand handle. The position of the right-hand handle is identified and mapped into the virtual reality space to determine the spatial position of the corresponding click mark (for example, a ray trajectory model can be mapped in the virtual reality space, and the end position of the ray trajectory model indicates the spatial position of the click mark).
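A minimal sketch of this ray-based picking (the positions, hit radius and event table are assumptions, not the patent's code): the handle pose is mapped to a ray, the ray's end position stands for the click mark, and the component model nearest to it within a hit radius is treated as selected, after which its pre-bound interaction function event runs.

```python
import math

def ray_end(origin, direction, length=5.0):
    """End position of the ray trajectory model, i.e., the click mark."""
    norm = math.sqrt(sum(c * c for c in direction)) or 1.0
    return tuple(o + length * c / norm for o, c in zip(origin, direction))

def pick_component(click_pos, components, hit_radius=0.25):
    """Return the component whose position matches the click mark, if any."""
    best, best_d = None, hit_radius
    for name, pos in components.items():
        d = math.dist(click_pos, pos)
        if d <= best_d:
            best, best_d = name, d
    return best

components = {"gift 1 display": (0.0, 1.7, 5.0), "cancel": (0.6, 1.7, 5.0)}
events = {"gift 1 display": lambda: print("show gift 1"),
          "cancel": lambda: print("dismiss the floating balls")}

click = ray_end(origin=(0.1, 1.2, 0.0), direction=(0.0, 0.1, 1.0))
chosen = pick_component(click, components)
if chosen:
    events[chosen]()  # run the pre-bound interaction function event
```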
- In some embodiments, a model containing a corresponding gift display control can also be displayed in the virtual reality space; when the gift display control model is triggered, the gift display instruction is obtained. The gift display control model can be triggered, for example, by moving the right-hand handle to select and click it; for the specific implementation, refer to the above embodiments.
- the gift display control model may be displayed in the spatial location area of the layer closest to the current user's viewing position.
- Further, a target gift model corresponding to the gift display instruction is generated. In some embodiments, the gift model rendering information corresponding to the gift display instruction can be determined, where the gift model rendering information is used to render the gift image or animation in the gift model; then the operation object information corresponding to the gift display instruction is obtained, where the operation object information includes but is not limited to user nicknames, user avatars, etc. obtained with user authorization. Based on the gift model rendering information and the operation object information, the target gift model corresponding to the gift display instruction is generated, so that the operating user can be intuitively identified from the target gift model.
- In some embodiments, a gift panel containing multiple candidate gift models can be displayed, and the candidate gift model selected by the user in the gift panel is obtained as the target gift model, wherein the target gift model can be selected, for example, by moving the right-hand handle to select and click.
- Step 202: Determine the target display layer corresponding to the target gift model among the multiple layers corresponding to the currently played virtual reality video, where the spatial location area corresponding to each layer is at a different display distance from the user's current viewing position.
- In the embodiments of the present disclosure, the spatial location area corresponding to each layer is at a different display distance from the user's current viewing position, so hierarchical display of layer information can be implemented. Different layer types are at different distances from the user, and usually the user first notices the content on the layer closer to him; therefore, the hierarchical structure between the above layers can be used according to the needs of the scene, and the target display layer corresponding to the target gift model is determined accordingly.
- In some embodiments, the first priority information of the target gift model is identified, the second priority information of each layer among the multiple layers is determined, and the layer whose second priority information matches the first priority information is determined as the target display layer. The first priority information of the target gift model can be determined based on, for example, the preset value information of the target gift model on the corresponding platform, and the second priority information of each layer is set according to its display distance from the current user. In this way, the depth information in the virtual reality space is fully utilized to determine the display distance from the current user's viewing position based on the priority of the target gift model in the scene.
- In other embodiments, the gift type of the target gift model is identified, and the layer matching the gift type among the multiple layers is determined as the target display layer. The displayable gift types corresponding to each layer that can display gift models can be preset; after the gift type of the target gift model is identified, the target display layer is determined according to the displayable gift types of the layers. In this way, the depth information in the virtual reality space is fully utilized to determine the display distance from the current user's viewing position according to the gift type of the target gift model. For example, when a full-screen gift layer is included among the multiple layers, if the gift type of the target gift model is a full-screen gift, the target gift model will be displayed in the full-screen gift layer.
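Both strategies can be summarized in a short sketch (the layer names, priorities and gift types are assumed for illustration): the target display layer is chosen either by matching the gift's first priority information against each layer's second priority information, or by matching the gift type against each layer's displayable gift types.

```python
LAYERS = [
    {"name": "full_screen_gift_layer", "priority": 1, "gift_types": {"full_screen"}},
    {"name": "half_screen_gift_layer", "priority": 2, "gift_types": {"half_screen"}},
]

def layer_by_priority(gift_priority):
    """Match the gift's first priority info to a layer's second priority info."""
    return next(l for l in LAYERS if l["priority"] == gift_priority)

def layer_by_gift_type(gift_type):
    """Match the identified gift type to a layer's displayable gift types."""
    return next(l for l in LAYERS if gift_type in l["gift_types"])

print(layer_by_priority(1)["name"])               # full_screen_gift_layer
print(layer_by_gift_type("half_screen")["name"])  # half_screen_gift_layer
```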
- Step 203: Display the target gift model on the target spatial location area corresponding to the target display layer.
- In summary, the model display scheme based on virtual reality space responds to the gift display instruction, generates the target gift model corresponding to the gift display instruction, determines the target display layer corresponding to the target gift model among the multiple layers corresponding to the currently played virtual reality video, where the spatial location area corresponding to each layer is at a different display distance from the user's current viewing position, and then displays the target gift model on the target spatial location area corresponding to the target display layer. In this way, gift-sending interaction is realized in the virtual reality space, simulating gift-sending interaction in the real world, and the depth information in the virtual reality space is also fully utilized to enhance the interactive experience.
- the method of displaying the target gift model on the target spatial location area corresponding to the target display layer includes, but is not limited to, at least one of the methods mentioned in the following embodiments:
- in some embodiments, the display path of the target gift model is determined, and the target gift model is controlled to be displayed in the target spatial location area according to the display path, where the display path is located within the target spatial location area and can be calibrated according to scene needs.
- for example, as shown in the figure, the target gift model slides into the corresponding target spatial location area from right to left, stops moving when the length of the sliding path reaches a preset length threshold, and remains fixed at the corresponding position.
- in some embodiments, the display duration of the target gift model is determined, where the display duration can be preset or determined according to rules set by the scene. For example, when the scene rule determines the display duration based on the quantity of the currently displayed target gift model, the greater the number of target gifts, the longer the corresponding display duration, so as to enhance the playback atmosphere.
- the target gift model is controlled to be displayed in the target spatial location area according to the display duration, and display of the target gift model is stopped after the preset display duration is reached.
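- a minimal sketch of the two display controls above (slide-in along a display path, then a timed hold), assuming hypothetical `DisplayArea.place`/`DisplayArea.remove` calls and illustrative parameter values:

```python
import time

class DisplayArea:
    """Hypothetical stand-in for a target spatial location area."""
    def place(self, model, offset_from_right):
        print(f"{model} at offset {offset_from_right:.2f}")
    def remove(self, model):
        print(f"{model} removed")

def show_gift(model, area, speed=0.5, length_threshold=2.0, display_duration=3.0):
    # Slide the gift in from right to left along its display path and stop
    # once the sliding path length reaches the preset length threshold.
    travelled = 0.0
    while travelled < length_threshold:
        travelled = min(travelled + speed, length_threshold)
        area.place(model, offset_from_right=travelled)
    # Hold at the fixed position for the display duration, which could be
    # preset or grow with the number of identical gifts currently shown.
    time.sleep(display_duration)
    area.remove(model)

show_gift("rocket_gift", DisplayArea())
```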
- in summary, the model display method based on the virtual reality space of the embodiments of the present disclosure responds to a gift display instruction, generates the target gift model corresponding to the instruction, and determines the target display layer among the multiple layers corresponding to the currently played virtual reality video.
- Step 401: Obtain the layer types of the multiple layers corresponding to the currently played virtual reality video.
- Step 402: Determine multiple target spatial location areas of the multiple layers in the virtual reality space according to the layer types, where each target spatial location area is at a different display distance from the user's current viewing position.
- the layer types of the multiple layers corresponding to the currently played virtual reality video are obtained.
- the layer types can be calibrated according to the scene and include, but are not limited to, an operation user interface layer, an information flow display layer, a gift display layer, an expression display layer, and so on, where each layer type can include at least one sub-layer; for example, the operation user interface layer can include a control panel layer, which will not be listed one by one here.
- multiple spatial location areas of the multiple layers in the virtual reality space are determined according to the layer types.
- each spatial location area is at a different display distance from the user's current viewing position, so different layer types appear visually at different distances from the user. Users usually notice the content on the layers closer to them first, so, according to scene needs, the hierarchical structure between the layers can be used to display different information at different levels and improve the user's viewing experience.
- in some embodiments, the coordinate values of the spatial location areas corresponding to the multiple layers differ along the axis of the user's line of sight (the depth direction), achieving a visually "near and far" staggered display for the user.
- in addition, the horizontal coordinate values of the spatial location areas may not be exactly the same (that is, front and rear layers contain parts that do not overlap in the X-axis direction), staggering the multiple layers in the horizontal direction;
- or the vertical coordinate values of the spatial location areas may not be exactly the same (that is, front and rear layers contain parts that do not overlap in the Y-axis direction), staggering the multiple layers in the vertical direction; the multiple layers can also be staggered in the horizontal and vertical directions at the same time, etc.
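- for illustration only, the staggered arrangement described above might be computed as follows; the depth step and X/Y offsets are assumed values, not values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class LayerPlacement:
    name: str
    z: float  # distance along the user's line of sight (near/far stagger)
    x: float  # horizontal offset so front/rear layers do not fully overlap
    y: float  # vertical offset for the same purpose

def stagger_layers(names, base_depth=1.0, depth_step=0.5, x_step=0.1, y_step=0.05):
    placements = []
    for i, name in enumerate(names):
        sign = 1 if i % 2 == 0 else -1  # alternate offsets left/right, up/down
        placements.append(LayerPlacement(
            name,
            z=base_depth + i * depth_step,
            x=sign * i * x_step,
            y=sign * i * y_step,
        ))
    return placements

for p in stagger_layers(["gift_layer", "info_flow_layer", "ui_layer"]):
    print(p)
```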
- in some embodiments, the distance between the target spatial location area of the target display layer displaying the target gift model and the user's eyes is adapted to the degree to which the target gift model needs to attract the user's attention.
- for the viewing user, this hierarchical structure between layers manifests only as different attention sensitivity to different layers.
- the viewing user cannot intuitively see the spatial location areas of the different layers; due to the hierarchical structure between the layers, the user only perceives that content on different layers is nearer or farther away. Since users tend to focus on information closer to themselves, this increases attention to the information on the layer closest to the user.
- in some embodiments, the layer priority of each layer can also be determined based on the needs of the scene, and the spatial location areas of the different layers determined based on that priority. According to indicators such as the importance of the target gift model, the target gift model is assigned to an adapted target display layer for display, further improving the user's viewing experience.
- in some embodiments, determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the layer types includes: determining multiple pieces of second priority information of the multiple layers according to the multiple layer types, and then determining the multiple spatial location areas of the multiple layers in the virtual reality space based on the multiple pieces of second priority information.
- the priority information corresponding to each layer type can be calibrated according to the needs of the scene.
- the same layer type may have different priority information in different scenes.
- in some embodiments, the video display position of the virtual reality video in the virtual reality space is determined.
- the video display position can be determined by the position of the canvas displaying the video; then, starting from the video display position and moving in the direction closer to the user's current viewing position, the position of each layer is determined one by one at a preset distance interval, in order of second priority information from low to high.
- the preset distance interval can be calibrated based on experimental data.
- in other embodiments, the target layer corresponding to the highest priority is determined according to the second priority information, and the spatial location area of the target layer in the virtual reality space is determined; this spatial location area may be different in different application scenarios.
- in some possible implementations, the area at a preset distance threshold from the user's current viewing position, in the direction of the user's line of sight, is taken as the spatial location area. In other possible implementations, the total distance between the user's current viewing position and the video display position of the virtual reality video in the virtual reality space can be determined and, taking the video display position as the starting point, the area at a preset proportion threshold of the total distance is determined as the spatial location area, thereby preventing the displayed layer from being so close to the viewing user that the information visible within the user's viewing angle is limited.
- after the spatial location area of the target layer is determined, starting from that area and moving in the direction away from the user's current viewing position, the spatial location areas of the other layers are determined one by one at a preset distance interval, in order of second priority information from high to low, thereby determining the multiple spatial location areas of the multiple layers.
- the preset distance interval here may be the same as or different from the preset distance interval in the above embodiment, and can be set according to scene requirements.
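- a sketch of the placement logic above under simplifying assumptions (positions reduced to distances along the user's line of sight); `proportion_threshold` and `distance_interval` are illustrative presets:

```python
def layer_distances(user_pos, video_pos, priorities,
                    proportion_threshold=0.6, distance_interval=0.4):
    """Return each layer's display distance from the user.

    The highest-priority layer is placed at a preset proportion of the total
    user-to-video distance, measured from the video display position, so it
    never sits too close to the viewer; the remaining layers are then stepped
    away from the user at a preset distance interval, in order of second
    priority information from high to low.
    """
    total = abs(video_pos - user_pos)
    nearest = total - proportion_threshold * total  # target layer's distance
    ordered = sorted(priorities.items(), key=lambda kv: kv[1], reverse=True)
    return {name: nearest + i * distance_interval
            for i, (name, _) in enumerate(ordered)}

print(layer_distances(user_pos=0.0, video_pos=5.0,
                      priorities={"gift": 3, "info_flow": 2, "ui": 1}))
# {'gift': 2.0, 'info_flow': 2.4, 'ui': 2.8}
```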
- FIG. 11 is another schematic diagram of a hierarchical structure between multiple layers provided by an embodiment of the present disclosure.
- Each layer is an arc-shaped area (the non-shaded area in the figure).
- the arc-shaped area is located within the viewing angle of the user.
- the spatial location area corresponding to each layer is located on the corresponding arc-shaped area (two layers are shown in the figure), and the center of the arc-shaped area corresponding to each layer lies in the direction of the user's line of sight from the user's current viewing position.
- in some embodiments, the display distance between each layer and the user's current viewing position may be determined based on the multiple layer types.
- the display distance corresponding to each layer is determined based on the second priority information corresponding to its layer type: the higher the second priority, the smaller the display distance between the corresponding layer and the user's viewing position. The display distance can be determined by referring to the method for determining the spatial location area in the above embodiments.
- taking the user's current viewing position as the center of a circle and the corresponding display distance as the radius, the arc-shaped area of each layer extends in the direction of the user's line of sight.
- the range covered by the arc-shaped area is related to the user's field of view, and the arc-shaped area is determined as the spatial location area of the corresponding layer.
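- under the same assumptions (user at the origin, line of sight along +Z), a layer's arc-shaped spatial location area could be sampled like this; the field-of-view value is illustrative:

```python
import math

def arc_points(display_distance, fov_deg=90.0, samples=5):
    """Sample points on an arc centered on the user's current viewing
    position, with radius equal to the layer's display distance and a
    span related to the user's field of view."""
    half = math.radians(fov_deg) / 2.0
    step = 2 * half / (samples - 1)
    return [(display_distance * math.sin(-half + i * step),   # x
             display_distance * math.cos(-half + i * step))   # z (line of sight)
            for i in range(samples)]

# Higher second priority -> smaller radius -> arc closer to the user.
for radius in (1.0, 1.5):
    print(radius, [(round(x, 2), round(z, 2))
                   for x, z in arc_points(radius, samples=3)])
```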
- the target display layer is the layer that is most suitable for the target gift model in the corresponding scenario.
- in some embodiments, the corresponding layer types include: an operation user interface layer (including a control panel layer); an information flow layer (including a public screen layer used to display comment information, etc.); a gift display layer, including a tray layer (used to display tray gifts), a host-guest gift layer (shown in the figure as a half-screen gift layer and a full-screen gift layer, used to display full-screen or half-screen gifts sent by any user watching the current live video), a guest gift layer (used to display flying gifts sent by other viewing users, etc.), and a main-state emission layer (used to display related gifts sent by the current viewing user); and an expression display layer, including a main-state emission layer (used to display related expressions emitted by the current viewing user) and a guest expression layer (used to display expressions sent by other viewing users).
- for example, when the target gift model is a tray gift, the target display layer is the layer corresponding to the tray layer type.
- for other gift types, the target display layer is likewise the layer corresponding to the matching gift layer type.
- the virtual reality space-based model display method of the embodiments of the present disclosure sets multiple layers corresponding to the virtual reality video, with the spatial location areas of different layers at different display distances from the user's current viewing position; through these structured layer settings, it achieves hierarchical display of different layer information, makes full use of the depth information of the virtual reality space, and achieves a three-dimensional display effect for the target gift model.
- the present disclosure also proposes a model display device in a virtual reality space.
- Figure 18 is a schematic structural diagram of a model display device based on virtual reality space provided by an embodiment of the present disclosure.
- the device can be implemented by software and/or hardware, and can generally be integrated in an electronic device to display models based on virtual reality space.
- the device includes: a generation module 810, a determination module 820, and a display module 830. Among them:
- the generation module 810 is configured to respond to the gift display instruction and generate a target gift model corresponding to the gift display instruction;
- the determination module 820 is configured to determine, among the multiple layers corresponding to the currently played virtual reality video, the target display layer corresponding to the target gift model, where the spatial location area corresponding to each layer is at a different display distance from the user's current viewing position;
- the display module 830 is configured to display the target gift model in the target spatial location area corresponding to the target display layer.
- the virtual reality space-based model display device provided by the embodiments of the present disclosure can execute the virtual reality space-based model display method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects of the executed method. Its implementation principles are similar and will not be repeated here.
- the present disclosure also proposes a computer program product, which includes a computer program/instructions.
- when the computer program/instructions are executed by a processor, the virtual reality space-based model display method in the above embodiments is implemented.
- FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- the electronic device 1300 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 13 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
- the electronic device 1300 may include a processor (e.g., a central processing unit, a graphics processor, etc.) 1301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1302 or a program loaded from the memory 1308 into a random access memory (RAM) 1303.
- in the RAM 1303, various programs and data required for the operation of the electronic device 1300 are also stored.
- the processor 1301, ROM 1302 and RAM 1303 are connected to each other through a bus 1304.
- An input/output (I/O) interface 1305 is also connected to bus 1304.
- the following devices may be connected to the I/O interface 1305: an input device 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; a memory 1308 including, for example, magnetic tape, hard disk, etc.; and a communication device 1309.
- the communication device 1309 may allow the electronic device 1300 to communicate wirelessly or wiredly with other devices to exchange data.
- although FIG. 13 illustrates the electronic device 1300 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via the communication device 1309, or installed from the memory 1308, or installed from the ROM 1302.
- when the computer program is executed by the processor 1301, the above functions defined in the virtual reality space-based model display method of the embodiments of the present disclosure are performed.
- the style of presenting virtual gifts in virtual live broadcast scenarios is relatively simple.
- the audience selects a virtual gift in the gift panel by emitting a cursor ray and presses the Trigger button on the handle to send the virtual gift to the host, which lacks interactivity.
- the present disclosure provides a human-computer interaction method, device, equipment, and storage medium, which enable users' diversified interactions with virtual gifts in a virtual space, enhance the fun of gift interaction in the virtual space, and mobilize users' enthusiasm for live broadcasts in the virtual space.
- the inventive concept of the present disclosure is: in response to a triggering operation on the gift entrance in the virtual space, present a close gift panel next to the user in the virtual space, to support the user in performing related gifting operations, through the hand model, on any virtual gift in the close gift panel. Then, in response to the hand model's operation of giving any virtual gift, the giving special effect of the virtual gift can be presented in the virtual space, thereby increasing the user's interactive operations when giving virtual gifts in the virtual space and realizing diverse gift-giving interactions in the virtual space.
- these diverse gift interactions enhance the interactive fun and interaction atmosphere of virtual gifts in the virtual space.
- Figure 19 is a flow chart of a human-computer interaction method provided by an embodiment of the present disclosure. This method can be applied to, but is not limited to, XR devices. The method can be executed by the human-computer interaction device provided by the present disclosure, where the human-computer interaction device can be implemented by any software and/or hardware.
- the human-computer interaction device can be configured in electronic equipment capable of simulating virtual scenes such as AR/VR/MR. This disclosure places no restriction on the specific type of the electronic equipment.
- the method may include the following steps:
- the virtual space can be a corresponding virtual environment simulated by the XR device for a certain live broadcast scene selected by any user, so as to display the corresponding live interactive information in the virtual space.
- the anchor is supported in selecting a certain type of live broadcast scene to build the corresponding virtual live broadcast environment as the virtual space of this disclosure, so that each audience member can enter the virtual space for the corresponding live broadcast interaction.
- multiple virtual screens, such as a live broadcast screen, a control screen, and a public screen, can be set up in the virtual space for different live broadcast functions, to display different live broadcast contents respectively.
- the live video stream of the anchor can be displayed on the live screen so that users can watch the corresponding live picture.
- the control screen can display host information, online audience information, related live broadcast recommendation lists, and current live broadcast resolution options, to facilitate users in performing various related live broadcast operations.
- various user comment information, likes, gifts, etc. in the current live broadcast can be displayed on the public screen, to help users follow the current live broadcast.
- the live screen, control screen, and public screen all face the user and are displayed at different locations in the virtual space. Furthermore, the position and style of any virtual screen can be adjusted to prevent it from blocking other virtual screens.
- in order to support gift-giving interaction, a corresponding gift entrance is displayed in the virtual space.
- the gift entrance is triggered first, for example by selecting it with the handle cursor, or by controlling the hand model to click on it, etc.
- the present disclosure detects the triggering operation on the gift entrance in the virtual space and presents a gift panel close to the user. Moreover, the corresponding hand model is also displayed in the virtual space, so that the hand model can be controlled to perform corresponding interactive operations on any virtual object in the virtual space.
- the close gift panel is located next to the user, that is, within reach of the user's hand model, to support the user in controlling the hand model through real handle operations or gesture operations, so that related gifting operations can be performed directly on any virtual gift in the close gift panel at close range, without using cursor rays to perform such operations from a distance.
- the hand model can be used to perform a series of interactive operations on any virtual gift, such as touching, grabbing, throwing, launching, and returning it to the panel.
- in some embodiments, the corresponding close gift panel may be presented at a predetermined position within the virtual reality space, where the distance between the predetermined position and the user meets a predetermined distance requirement, so that the close gift panel is located next to the user, that is, within reach of the user's hand model, to support user operation through real handle operations or gesture operations.
- for example, the close gift panel in the present disclosure can be presented horizontally around the user, at a short distance in front of the user, such as 40-45 centimeters, with its horizontal position approximately flush with the user's elbow.
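- purely as an illustration of this placement (the function, its parameters, and the default distance are all hypothetical):

```python
def close_panel_position(user_pos, forward_xz, elbow_height, distance=0.42):
    """Place the close gift panel ~40-45 cm in front of the user, laid out
    horizontally and roughly flush with the user's elbow height."""
    x, _, z = user_pos    # user's position (head/body anchor)
    fx, fz = forward_xz   # unit vector of the user's horizontal facing direction
    return (x + fx * distance, elbow_height, z + fz * distance)

# User at the origin facing +Z, elbow roughly 1.1 m above the floor.
print(close_panel_position((0.0, 1.7, 0.0), (0.0, 1.0), elbow_height=1.1))
```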
- when there is a corresponding User Interface (UI) control in the close gift panel, performing relevant interactive operations on any UI control through the hand model is also supported. For example, a series of interactive operations such as touching, clicking, pressing, and lifting can be performed on any UI control through the hand model. The UI controls can be used to perform related panel operations on the close gift panel, such as panel page turning, panel collapsing, etc.
- in some embodiments, the present disclosure can set a corresponding page-turning function for the close gift panel.
- the page-turning operations supported by the close gift panel can include, but are not limited to, the following three scenarios:
- Scenario 1: the hand model performs a drag page-turning operation on the close gift panel.
- the close gift panel can be selected by touching it with the hand model.
- when the hand model touches the close gift panel, the close gift panel can be highlighted and the real handle of the XR device can be controlled to vibrate, reminding the user that the close gift panel has been selected.
- then, the user can control the hand model to drag the close gift panel, thereby generating the drag page-turning operation on the close gift panel.
- Scenario 2: the hand model triggers a page-turning control in the close gift panel.
- corresponding page-turning controls can be set in the close gift panel, and the page-turning controls can include two types: a previous-page control and a next-page control.
- when any page-turning control is triggered through the hand model, the triggering operation on the page-turning control is generated as the page-turning operation on the close gift panel.
- Scenario 3: the hand model performs a toggle operation on the joystick component of the handle model.
- in order to ensure users' diversified interactions with any virtual object in the virtual space, the present disclosure displays the corresponding handle model in addition to the corresponding hand model, so that different interactive operations are performed in the virtual space through the different characteristics of the hand model and the handle model. Then, as shown in Figure 21(C), by holding the handle model with the hand model and moving the joystick component of the handle model left and right, the page-turning operation on the close gift panel can be generated.
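- a sketch that funnels the three page-turning scenarios above into a single page-turn action; the event fields and drag threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PanelEvent:
    kind: str            # "drag" | "control" | "joystick"
    dx: float = 0.0      # horizontal drag distance of the hand model
    target: str = ""     # "next_page" or "previous_page" control
    direction: str = ""  # joystick toggled "left" or "right"

class CloseGiftPanel:
    drag_threshold = 0.15  # minimum drag distance that counts as a page turn
    page = 0
    def turn_page(self, step):
        self.page = max(0, self.page + step)
        print("page ->", self.page)

def handle_page_turn(panel, event):
    if event.kind == "drag" and abs(event.dx) > panel.drag_threshold:
        panel.turn_page(+1 if event.dx < 0 else -1)  # drag left = next page
    elif event.kind == "control":
        panel.turn_page(+1 if event.target == "next_page" else -1)
    elif event.kind == "joystick":
        panel.turn_page(+1 if event.direction == "right" else -1)

panel = CloseGiftPanel()
handle_page_turn(panel, PanelEvent("joystick", direction="right"))
handle_page_turn(panel, PanelEvent("drag", dx=-0.3))
```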
- in response to the triggering operation on the gift entrance in the virtual space, the corresponding close gift panel can be presented in the virtual space.
- the close gift panel is beside the user and supports the user in using the hand model to perform related gifting operations on any virtual gift in the panel. Therefore, in response to the hand model's operation of giving any virtual gift, the giving special effect of the virtual gift can be presented in the virtual space; there is no need to select a virtual gift through a cursor ray and then press the handle Trigger key to present the corresponding gift.
- the present disclosure can detect, in real time, various motion information of the hand model with respect to any virtual gift, to determine whether the user performs a corresponding giving operation on the virtual gift through the hand model. After the hand model's giving operation for any virtual gift is detected, it means that the user currently needs to give the virtual gift to the host; therefore, the present disclosure can present the giving special effect of the virtual gift in the virtual space.
- the special effect can be a throwing effect in which the virtual gift is thrown into the virtual space through the hand model, or an effect produced by sending the virtual gift into the virtual space through gift-giving props, which is not limited by this disclosure.
- the giving special effects of the virtual gift may include, but are not limited to: the spatial throwing trajectory of the virtual gift in the virtual space, and throwing special effects set based on the spatial throwing trajectory and/or the virtual gift.
- the throwing special effect may be an animation effect displayed at the final landing point after the virtual gift is thrown in the virtual space.
- considering that when a user watches the host's live video stream on the live screen in the virtual space, there are usually corresponding visible areas and blind areas, and that other virtual objects in the virtual space, such as the control screen, public screen, and gift panels, may also block the user's view of the live video stream; and considering that gift-giving special effects usually need to be presented between the user and the host's live video stream (that is, the live screen) so that the user can watch them and judge whether the virtual gift was given successfully: if a virtual gift is given in the virtual space but the user cannot see its special effects, the interaction between the user and the host regarding the virtual gift cannot be guaranteed.
- therefore, the present disclosure can pre-delimit a gift-giving safe area in the virtual space according to the user's position in the virtual space and the position of the live broadcast screen, to ensure that users can see the corresponding giving special effects for virtual gifts that fall within the gift-giving safe area after being thrown.
- the live content display area is used to display various live broadcast related contents, such as live video streams, live comments, live broadcast lists, anchor information, etc. That is to say, the live content display area can be the area where the above-mentioned live screen, public screen, control screen and other virtual screens are located.
- the present disclosure can set the live content display area, the gift-giving safe area, and the close gift panel in the virtual space on different spatial planes. Corresponding distance intervals and area sizes can be set for each of them, and, according to the preset distances and area sizes, the live content display area, the gift-giving safe area, and the close gift panel are distributed in sequence within the virtual space, so that they occupy relatively independent locations and do not affect each other.
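- an illustrative layout of the three regions on successive spatial planes in front of the user; the gap values are assumptions rather than values from the disclosure:

```python
def layout_planes(user_z=0.0, panel_gap=0.45, safe_gap=1.5, content_gap=4.0):
    """Distribute the regions from near to far along the line of sight:
    close gift panel, gift-giving safe area, live content display area."""
    return {
        "close_gift_panel": user_z + panel_gap,
        "gift_giving_safe_area": user_z + panel_gap + safe_gap,
        "live_content_display_area": user_z + panel_gap + safe_gap + content_gap,
    }

print(layout_planes())
```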
- in some embodiments, the present disclosure can control the XR device (such as a real handle) to perform different levels of vibration according to the different interactive operations performed by the hand model on a virtual object.
- for example, when hovering over a virtual object through the hand model, the XR device can be controlled to vibrate slightly; when the virtual object is clicked through the hand model, the XR device can be controlled to vibrate with greater intensity.
- the virtual objects that interact with the hand model can be the gift entrance, the close gift panel, each virtual gift in the close gift panel, related user interaction controls, etc.
- for example, when hovering over a virtual gift with the hand model, the XR device can be controlled to vibrate slightly; when holding the virtual gift with the hand model, the XR device can be controlled to vibrate with greater intensity.
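- a sketch of mapping interaction kinds to vibration levels; the amplitudes and the `send_haptics` stub are assumptions, not a real XR API:

```python
# Illustrative interaction -> vibration intensity mapping.
VIBRATION_LEVELS = {
    "hover": 0.2,  # slight vibration when hovering over a virtual object
    "click": 0.6,  # stronger vibration when clicking it
    "grab":  0.8,  # strongest when holding a virtual gift
}

def send_haptics(amplitude, duration_s=0.05):
    # Stand-in for the real handle's haptics call.
    print(f"vibrate handle: amplitude={amplitude}, duration={duration_s}s")

def on_interaction(kind):
    level = VIBRATION_LEVELS.get(kind)
    if level is not None:
        send_haptics(level)

on_interaction("hover")
on_interaction("grab")
```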
- the technical solution provided by the embodiments of the present disclosure can, in response to the triggering operation on the gift entrance in the virtual space, present the corresponding close gift panel in the virtual space.
- the close gift panel is placed next to the user and supports the user in performing related gifting operations, through the hand model, on any virtual gift in the close gift panel. Therefore, in response to the hand model's operation of giving any virtual gift, the giving special effect of the virtual gift can be presented in the virtual space; there is no need to select a virtual gift through a cursor ray and then press the handle Trigger key to present the corresponding gift.
- the present disclosure can set a remote gift entrance in the virtual space, and the remote gift entrance can be located at any virtual location in the virtual space.
- the remote gift entrance can be located at a certain location within the public screen.
- cursor rays can be used to trigger interaction at a distance.
- a close gift entrance can also be set in the virtual space.
- the close gift entrance can be at a position close to the user in the virtual space.
- for example, the close gift entrance can be located a short distance in front of the right side of the user's body, such as about 70 centimeters, and can be flush with the user's elbow, making it within reach of the hand model.
- the close gift entrance is closer to the user, and can support triggering interactions at close range through the hand model.
- the process of presenting the close gift panel in the virtual space will be explained below for the two cases of the remote gift entrance and the close gift entrance.
- for the remote gift entrance, a corresponding cursor ray can be emitted toward the remote gift entrance through the handle model or hand model. The cursor ray is used to select the remote gift entrance, and the selection is then confirmed by pressing the Trigger key on the real handle, thereby generating a cursor selection operation on the remote gift entrance. In response to the cursor selection operation on the remote gift entrance, the corresponding close gift panel is presented in the virtual space. That is, after the cursor selection operation on the remote gift entrance is detected, it means that there is currently a virtual gift-giving interaction between the user and the host in the virtual space, so a close gift panel beside the user can be presented in the virtual space, so that the hand model can perform corresponding gifting operations on any virtual gift in the close gift panel.
- in some cases, the presented close gift panel needs to be collapsed in the virtual space. To ensure that the close gift panel can be collapsed conveniently when it was presented by triggering the remote gift entrance, the present disclosure can set a collapse control in the close gift panel. As shown in Figure 22, clicking the collapse control in the close gift panel through the hand model indicates that the presented close gift panel needs to be collapsed in the virtual space.
- the present disclosure detects, in real time, the triggering operation performed by the hand model on the collapse control in the close gift panel.
- in response to that triggering operation, the presented close gift panel can be collapsed in the virtual space.
- in some embodiments, when the remote gift entrance is triggered to present the close gift panel in the virtual space or to collapse the close gift panel,
- the remote presentation special effect or the remote collapse special effect of the close gift panel can be played in the virtual space.
- the remote presentation special effect can be a corresponding remote presentation animation and/or sound effect, etc.
- the remote collapse special effect can be a remote collapse animation and/or sound effect that is the reverse of the remote presentation special effect.
- for the close gift entrance: when the user first enters the virtual space, the close gift entrance defaults to a first ghost state.
- the first ghost state can be a ghost style indicating that the close gift panel is not presented, so as to avoid blocking the user's view of the host's live content.
- the present disclosure can control the hand model to hover over the close gift entrance to activate it. Then, as shown in Figure 23, in response to the hovering operation of the hand model over the close gift entrance, the close gift entrance can be controlled to transform from the first ghost state to an activated state.
- for example, the close gift entrance can glow and enlarge when activated, so that its icon changes from a virtual shadow to a solid form.
- moreover, the present disclosure can also control the real handle of the XR device to vibrate accordingly.
- the user confirms selection of the close gift entrance through the hand model by pressing the Trigger key on the real handle, which generates a hover confirmation operation of the hand model on the close gift entrance.
- in response to the hover confirmation operation, the corresponding close gift panel can be presented in the virtual space. That is, after the hover confirmation operation of the hand model on the close gift entrance is detected, it means that there is currently a virtual gift-giving interaction between the user and the anchor in the virtual space, so a close gift panel beside the user can be presented in the virtual space,
- so that the hand model can be used to perform corresponding gifting operations on any virtual gift in the close gift panel.
- after the close gift panel is presented, the present disclosure can also control the close gift entrance to transform further from the activated state to a second ghost state.
- the second ghost state may transform the icon of the close gift entrance into a ghost style indicating that the close gift panel has been presented.
- for example, the first ghost state may be a closed virtual gift box,
- and the second ghost state may be an open virtual gift box.
- in some cases, the presented close gift panel needs to be collapsed in the virtual space. To ensure convenient collapsing when the close gift panel was presented by triggering the close gift entrance located near the user, the present disclosure can trigger the close gift entrance again through the hand model to instruct that the presented close gift panel be collapsed in the virtual space.
- the hand model hovers over the close gift entrance in the second ghost state again to reactivate it. The close gift entrance then glows and enlarges accordingly, so that its icon again changes from a virtual shadow to a solid form. The present disclosure can still control the real handle of the XR device to vibrate accordingly.
- the user confirms selection of the close gift entrance again through the hand model by pressing the Trigger key on the real handle, which generates another hover confirmation operation of the hand model on the close gift entrance. In response to this further hover confirmation operation, the presented close gift panel can be collapsed in the virtual space.
- at the same time, the close gift entrance can be controlled to transform back to the first ghost state, so that the close gift panel can be presented again later.
- in some embodiments, when the close gift entrance is triggered to present the close gift panel in the virtual space or to collapse the close gift panel,
- the close presentation special effect or the close collapse special effect of the close gift panel can be played in the virtual space.
- the close presentation special effect may be a corresponding close presentation animation and/or sound effect, etc.
- the close collapse special effect may be a close collapse animation and/or sound effect that is the reverse of the close presentation special effect.
- for example, the close presentation special effect can be: after the virtual gift box is triggered through the hand model, the virtual gift box changes from virtual to solid and opens, so that a beam of light flies out of it. The light beams flying out of the virtual gift box gradually converge into the shape of the close gift panel around the user, and each virtual gift is then gradually displayed. After the close presentation special effect finishes playing, the close gift panel is presented stably around the user, with the corresponding virtual gifts displayed on it. The virtual gift box can remain open and gradually change from solid back to virtual.
- correspondingly, when the opened virtual gift box is triggered again through the hand model, it glows and enlarges again and gradually changes from virtual to solid. The close gift panel is then controlled to turn into a beam of light that flies away from the user and returns to the virtual gift box, and the virtual gifts gradually disappear along with the close gift panel. After the light beam flies back to the virtual gift box, the virtual gift box is controlled to close, gradually changes from solid back to virtual, and then returns to the default first ghost state.
- in order to ensure that users can conveniently give any virtual gift in the close gift panel, the present disclosure sets an experience mode for the close gift panel, so that when the close gift panel is presented in the virtual space, the user is guided through a complete gift-giving process for the virtual gifts in the panel under the experience mode, thereby improving the user's understanding of the close gift panel.
- the process of presenting the close gift panel in the virtual space may include the following steps:
- considering that the user may not understand the gift-giving function of the close gift panel the first few times it is presented in the virtual space (such as the first two times), the user needs to be guided through a complete gift-giving process in the experience mode to enhance their understanding of the close gift panel.
- when the close gift panel is subsequently re-presented (for example, from the third presentation onward), there is no need to guide the user through the complete gift-giving process in the experience mode.
- therefore, to address the guidance requirements of the close gift panel in the experience mode, the present disclosure can set a limit on the number of presentations of the close gift panel, which is the preset number of times in this disclosure.
- for example, the preset number of times may be 2, which is not limited in this disclosure.
- each time the close gift panel is presented in the virtual space, the corresponding number of presentations is recorded, to determine whether the current presentation of the close gift panel is within the preset number of times and thus whether it is necessary to guide the user through a gift-giving process.
- if the current presentation of the close gift panel in the virtual space is within the preset number of times, this presentation needs to guide the user through a complete gift-giving process so that the user can accurately understand the gift-giving functions supported by the close gift panel. The panel template of the close gift panel in the experience mode can then first be presented in the virtual space, so that the user enters the experience mode.
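- the presentation-count gating described above, sketched with the illustrative preset limit of 2:

```python
class GiftPanelPresenter:
    def __init__(self, preset_limit=2):
        self.preset_limit = preset_limit
        self.presentations = 0

    def present(self):
        # Record every presentation; the first `preset_limit` presentations
        # enter the guided, fee-free experience mode via the panel template.
        self.presentations += 1
        if self.presentations <= self.preset_limit:
            return "panel_template"   # experience mode, no deduction entrance
        return "close_gift_panel"     # normal panel with deduction entrance

p = GiftPanelPresenter()
print([p.present() for _ in range(3)])
# ['panel_template', 'panel_template', 'close_gift_panel']
```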
- the panel template has the same style as the close gift panel and is likewise presented around the user.
- since the main purpose of the panel template is to guide the user through a complete gift-giving process, there is no limit on which virtual gifts are given; one or a small number of virtual gifts can therefore exist in the panel template as the corresponding experience gifts, instead of all virtual gifts being displayed as in the close gift panel.
- moreover, the gift-giving process performed by the user in the experience mode is not an actual gift-giving. Therefore, in the experience mode, the fee display and deduction entrance for the experience gifts are cancelled, so that when the user performs the gift-giving process in the experience mode, no corresponding deduction is made.
- S620: According to the gift-giving guidance information in the experience mode, control the hand model to perform corresponding giving operations on the experience gifts in the panel template.
- in the experience mode, corresponding gift-giving guidance information is displayed for each step of the gift-giving operation performed by the user. According to the guidance information at each step, the hand model can be controlled to perform the corresponding giving operation on the experience gifts in the panel template. After the giving operation at one step is completed, the guidance information for the next step is presented, and the hand model continues to be controlled to perform the corresponding giving operation, looping in sequence to support the user in completing a complete gift-giving process under the experience mode.
- for example, when the panel template is first presented, a gift-giving guidance message such as "Currently in the experience mode, there is no charge for giving gifts in this mode" is displayed, to remind users that gifts given in the experience mode will not be deducted. This guidance message automatically disappears after being displayed for 3 seconds. Then, if the user has not performed the current gift-giving operation within a corresponding period after the last guidance message disappears, a guidance message instructing the user to perform the current gift-giving operation continues to be presented in the virtual space.
- the guidance information can be text guidance or virtual animation guidance, which is not limited in this disclosure.
- for example, a virtual guidance animation of holding the experience gift through the hand model is displayed, or a soft guide line appears next to the experience gift connected to a piece of guidance text, which could be "Touch and grab the gift".
- then, a virtual guidance animation of throwing the experience gift through the hand model continues to be displayed, or a soft guide line appears next to the experience gift connected to guidance text, which could be "Throw and release the Grab key to give the gift".
- throwing the held experience gift into the virtual space through the hand model then completes one gift-giving process.
- after the gift-giving process is completed, an experience success pop-up window is displayed in the virtual space.
- the experience success pop-up window can be provided with a confirmation control for the successful gift-giving experience; triggering the confirmation control through the hand model exits the experience mode.
- that is, triggering the confirmation control generates the exit operation of the experience mode. In response to the exit operation, the presentation of the panel template is cancelled in the virtual space, the close gift panel is presented instead, and the corresponding deduction entrance is displayed so that actual gift-giving operations can be performed.
- the present disclosure uses the following steps to explain the process of giving any virtual gift through the hand model in the virtual space:
- the hand model simulated in the virtual space can be controlled through handle operations or gesture operations.
- the hand model performs corresponding movements to carry out the corresponding holding operation on any virtual gift in the close gift panel.
- after the hand model holds any virtual gift, various gift-giving movements can be performed in the virtual space, in order to simulate the actual gift-giving process and enhance the diversity of gift-giving interactions in the virtual space.
- in response to the holding operation of the hand model on any virtual gift, the user can be supported in inputting corresponding motion information into the XR device to indicate the motion to be performed by the hand model: for example, manipulating the direction buttons on the handle, controlling the handle to perform corresponding movements, or performing corresponding movement gestures with the hand can all represent the movements the hand model needs to perform after holding a virtual gift. Based on this motion information, a movement instruction initiated by the user for the hand model can be generated.
- the present disclosure also determines, in real time, the movement posture change amount of the hand model after it holds the virtual gift, so as to determine whether the movement posture change amount satisfies the giving trigger condition of the held virtual gift.
- for the actual giving operation of any virtual gift, the present disclosure can set a corresponding giving trigger condition for the virtual gift in advance.
- in some embodiments, virtual gifts in the virtual space can be divided into throwable gifts and non-throwable gifts, to enhance the diversity of gift-giving in the virtual space.
- throwable gifts can be virtual gifts that can be successfully given to the host by directly throwing them with the hand model in the virtual space, such as individual emoticon gifts, heart-shaped gifts, etc.
- non-throwable gifts can be stored in corresponding gift-giving props and require the hand model to perform corresponding interactive operations with the held gift-giving prop in order to be successfully given to the host, such as bubble gifts given through bubble wands, hot air balloon gifts activated by a heating device, etc.
- the giving trigger condition for a throwable gift can be set as: the hand model is in the grip-cancel posture after performing a throwing motion or a continuous throwing motion. That is, based on the movement posture change performed by the hand model after holding the throwable gift, it is determined whether the hand model has performed a throwing motion or a continuous throwing motion, and whether, after that motion, the grip on the throwable gift is cancelled so that the hand model ends in the grip-cancel posture. If this trigger condition is met, it means that the held virtual gift has been thrown out into the virtual space through the hand model, and thus that the throwable gift needs to be given to the host in the virtual space.
- correspondingly, the giving trigger condition for a non-throwable gift can be set as:
- the target part of the hand model that interacts with the gift-giving prop performs the giving operation set by the gift-giving prop.
- when the hand model holds a non-throwable gift, this is represented as the hand model holding a gift-giving prop that stores the non-throwable gift. For example, when giving a bubble gift through a bubble wand, the hand model holds the bubble wand taken from the close gift panel in order to emit the corresponding bubble gift.
- if the trigger condition for giving a non-throwable gift is met, it means that the non-throwable gift in the gift-giving prop can be sent out in the virtual space through the corresponding interaction between the target part of the hand model and the gift-giving prop, and thus that the non-throwable gift needs to be given to the host in the virtual space.
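- the two giving trigger conditions can be sketched as one check; `MotionState` and its fields are hypothetical summaries of the movement posture change amount:

```python
from dataclasses import dataclass, field

@dataclass
class MotionState:
    performed_throw: bool = False   # throwing or continuous throwing motion done
    grip_cancelled: bool = False    # hand model ended in the grip-cancel posture
    prop_actions: set = field(default_factory=set)  # operations done on the prop

def giving_triggered(gift, motion):
    if gift["throwable"]:
        # Throwable: a throw motion followed by the grip-cancel posture.
        return motion.performed_throw and motion.grip_cancelled
    # Non-throwable: the target part of the hand model performs the giving
    # operation set by the gift-giving prop (e.g. waving a bubble wand).
    return gift["required_action"] in motion.prop_actions

heart = {"throwable": True}
bubble = {"throwable": False, "required_action": "wave_wand"}
print(giving_triggered(heart, MotionState(performed_throw=True, grip_cancelled=True)))
print(giving_triggered(bubble, MotionState(prop_actions={"wave_wand"})))
```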
- after determining the movement posture change amount performed by the hand model after holding the virtual gift, the present disclosure first determines the giving trigger condition of the held virtual gift. It then determines whether the virtual gift needs to be given to the host in the virtual space by judging whether the movement posture change amount satisfies that trigger condition. When the movement posture change amount meets the giving trigger condition of the held virtual gift, it means that the user has instructed that the virtual gift be given to the host, and the hand model's giving operation for the virtual gift is thereby determined.
- then, the giving special effect of the virtual gift can be presented in the virtual space.
- the giving operation can be performed once or continuously, so virtual gifts have different giving types. Therefore, when presenting the giving special effect of a virtual gift in the virtual space, the present disclosure also determines, based on the movement posture change of the hand model after holding the virtual gift, whether the movement performed is a one-time giving or a continuous giving, thereby determining the giving type of the virtual gift. The giving special effect of the virtual gift under the corresponding giving type can then be presented in the virtual space.
- in addition, the present disclosure can set return conditions by determining whether the hand model cancels its grip on the virtual gift without performing the corresponding giving operation.
- when a return condition is met, the virtual gift is controlled to fold back from the hand model to its original position on the close gift panel.
- the present disclosure can determine whether the movement posture change amount meets a return condition of the virtual gift, so as to determine whether the user has actively given up giving the virtual gift.
- when the movement posture change amount indicates that the hand model performed the grip-cancel operation without performing the giving operation, the movement posture change amount satisfies the return condition of the virtual gift. Therefore, in the virtual space, the virtual gift no longer held by the hand model can be controlled to fold back from the position of the hand model to its original position on the gift panel.
- when the hand model cancels its grip on the virtual gift, there may be two situations: 1) the hand model directly cancels its grip at any motion point after performing the corresponding movement, that is, the grip-cancel operation is performed in place at that motion point; 2) the hand model drives the held virtual gift through the corresponding movements and cancels its grip after returning the gift to its original position on the gift panel, that is, the grip-cancel operation is performed after the hand model brings the virtual gift back from any motion point to its original position on the gift panel.
- for the first situation, the present disclosure can set a first return condition: the hand model performs the grip-cancel operation at any motion point while holding the virtual gift. That is, the hand model moves to some motion point after holding the virtual gift and performs the grip-cancel operation in place at that point. When the movement posture change amount of the hand model holding the virtual gift meets the first return condition, the present disclosure can control the virtual gift to perform a preset vertical downward movement from the hand model and then fold back to its original position on the gift panel.
- in order to simulate the effect of gravity on the virtual gift after it is released by the hand model, the virtual gift can be controlled to perform the preset vertical downward movement for a short period of time.
- the downward movement distance of the preset vertical movement can be determined based on the height of the hand model's position when it cancels its grip on the virtual gift and the height of the gift panel. Normally, the height of the hand model when it releases the virtual gift is greater than the height of the gift panel. For example, assuming the height of the hand model when it cancels its grip is A and the height of the gift panel is B, the downward movement distance of the preset vertical movement can be 0.2*(A-B). After the virtual gift completes the preset vertical downward movement, it is controlled to return to its original position on the gift panel.
- for the second situation, the present disclosure can set a second return condition:
- the hand model holds the virtual gift and moves it until it is above its original position on the gift panel, then performs the grip-cancel operation. That is, after the hand model, holding the virtual gift, moves back above the gift's original position on the gift panel, the grip-cancel operation is performed above that original position. When the movement posture change amount of the hand model holding the virtual gift meets the second return condition, the present disclosure can control the virtual gift to fold back from its current position to its original position on the gift panel.
- since the hand model has already brought the virtual gift back above its original position on the gift panel, after the grip is cancelled the virtual gift can be directly controlled to fold back from its current position above the original position to the original position on the gift panel.
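- a sketch of the two return conditions, reusing the 0.2*(A-B) drop distance from the example above; the function and parameter names are hypothetical:

```python
def home_gift(hand_height, panel_height, released_above_origin):
    """Return the motion steps for a gift whose grip was cancelled
    without a giving operation being performed."""
    if released_above_origin:
        # Second return condition: released above its original position,
        # so fold straight back to the original position on the panel.
        return ["fold_back_to_origin"]
    # First return condition: released at an arbitrary motion point, so
    # simulate gravity with a short vertical drop of 0.2 * (A - B), where
    # A is the hand model's release height and B is the panel's height.
    drop = 0.2 * (hand_height - panel_height)
    return [f"drop_vertically({drop:.3f})", "fold_back_to_origin"]

print(home_gift(hand_height=1.3, panel_height=1.1, released_above_origin=False))
# ['drop_vertically(0.040)', 'fold_back_to_origin']
```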
- when any virtual gift is held through the hand model, in order to ensure effective interaction between the hand model and the virtual gift, the present disclosure sets a preset holding upper limit for the hand model's holding duration. If, when the holding duration reaches the preset upper limit, the movement posture change amount of the hand model holding the virtual gift still meets neither the giving trigger condition nor a return condition of the virtual gift, the giving guidance information of the virtual gift can be presented once in the virtual space. The giving guidance information indicates the giving operation that needs to be performed to give the virtual gift through the hand model in the virtual space, so as to guide the user in controlling the hand model to perform the corresponding giving operation accurately on the held virtual gift.
- FIG. 27 is a schematic diagram of a human-computer interaction device provided by an embodiment of the present disclosure.
- the human-computer interaction device 900 can be configured in an XR device.
- the human-computer interaction device 900 includes:
- the gift panel presentation module 910 is configured to respond to the triggering operation of the gift entrance in the virtual space and present the corresponding close gift panel in the virtual space;
- the gift giving module 920 is configured to respond to the giving operation performed by the hand model in the virtual space on any virtual gift in the close gift panel, and present the giving special effect of the virtual gift in the virtual space.
- the gift panel presentation module 910 can be used to:
- the remote gift entrance is located in any virtual screen in the virtual space.
- the virtual gift interactive device 900 may also include:
- the first retracting module is configured to retract the close gift panel in the virtual space in response to a triggering operation of the hand model toward the retract control in the close gift panel.
- the virtual gift interactive device 900 may also include:
- a remote special effects playback module, used to play the remote presentation special effect or the remote retracting special effect of the close gift panel in the virtual space.
- the gift panel presentation module 910 can also be used to: present the corresponding close gift panel in the virtual space in response to a hovering confirmation operation of the hand model toward a close gift entrance in the virtual space;
- the close gift entrance is located at a position close to the user in the virtual space, and is in a first ghost state by default.
- the virtual gift interactive device 900 may also include:
- a state change module, configured to control the close gift entrance to change from the ghost state to an activated state in response to a hovering operation of the hand model toward the close gift entrance.
- the state change module can also be used to: control the close gift entrance to change from the activated state to a second ghost state.
- the virtual gift interactive device 900 may also include:
- the second retracting module is configured to retract the close gift panel in the virtual space in response to a repeated hovering confirmation operation of the hand model toward the close gift entrance.
- the virtual gift interactive device 900 may also include:
- a close special effects playback module, used to play the close presentation special effect or the close retracting special effect of the close gift panel in the virtual space.
- the state change module can also be used to: control the close gift entrance to change back to the first ghost state.
- the gift panel presentation module 910 can also be used to: present the corresponding close gift panel at a predetermined position in the virtual reality space, where the distance between the predetermined position and the user meets a predetermined distance requirement.
- the gift giving module 920 can be used to: determine the gift giving safety area set in the virtual space, and present the giving special effect of the virtual gift in the virtual space according to the gift giving safety area.
- the live content display area, the gift giving safety area and the close gift panel in the virtual space lie in different spatial planes, and are distributed sequentially in the virtual space according to a preset distance and area size.
- the giving special effect of the virtual gift includes: a spatial throwing trajectory of the virtual gift in the virtual space, and a throwing special effect set based on the spatial throwing trajectory and/or the virtual gift.
- the giving operation of the hand model in the virtual space toward any virtual gift in the close gift panel is determined by a giving operation determination module.
- the giving operation determination module can be used to: determine the movement posture change amount of the hand model when holding the virtual gift, in response to a holding operation of the hand model in the virtual space toward any virtual gift in the close gift panel; and determine the giving operation of the hand model toward the virtual gift when the movement posture change amount meets the giving trigger condition of the virtual gift.
- the human-computer interaction device 900 may also include:
- a gift homing module, configured to control the virtual gift, in the virtual space, to fold back from the hand model to its original position on the close gift panel when the movement posture change amount meets the homing condition of the virtual gift.
- the human-computer interaction device 900 may also include:
- a gift guidance module, used to present the giving guidance information of the virtual gift in the virtual space if, by the time the duration for which the hand model holds the virtual gift reaches the preset holding upper limit, the movement posture change amount has met neither the giving trigger condition nor the homing condition of the virtual gift.
- the gift panel presentation module 910 can also be used to: present the panel template of the close gift panel in experience mode in the virtual space, if the current presentation of the close gift panel in the virtual space is within a preset number of presentations;
- control the hand model to perform the corresponding giving operation on the experience gift in the panel template, according to the gift giving guidance information in experience mode;
- present the exit information of the experience mode in the virtual space, after the experience gift is successfully given in experience mode;
- present the close gift panel in the virtual space, in response to an exit operation of the experience mode.
- the human-computer interaction device 900 may also include:
- a panel page turning module, used to control the close gift panel to turn pages in response to a page turning operation of the close gift panel, and to present new virtual gifts following the page turning of the close gift panel.
- the page turning operation of the close gift panel includes at least one of the following:
- a dragging page turning operation of the hand model toward the close gift panel;
- a triggering operation of the hand model toward a page turning control in the close gift panel;
- a toggle operation of the hand model toward a rocker assembly in the handle model.
- the human-computer interaction device 900 may also include:
- a vibration module is used to control the XR device to perform different degrees of vibration according to different interactive operations performed by the hand model toward any virtual object in the virtual space.
- in response to the triggering operation of the gift entrance in the virtual space, the corresponding close gift panel can be presented in the virtual space.
- the close gift panel is placed beside the user and supports the user in using the hand model to perform the related giving operations on any virtual gift in the close gift panel. Therefore, in response to a giving operation of the hand model toward any virtual gift, the giving special effect of the virtual gift can be presented in the virtual space, without needing to select a virtual gift through a cursor ray and then present the corresponding gift by pressing the handle Trigger key.
- the device 900 shown in Figure 27 can execute any method embodiment provided by the present disclosure, and the foregoing and other operations and/or functions of each module in the device 900 shown in Figure 27 respectively implement the corresponding processes of the above method embodiments; for brevity, they are not repeated here.
- the software module may be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, register, etc.
- the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps in the above method embodiment in combination with its hardware.
- Figure 28 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
- the electronic device 1000 may include:
- Memory 1010 and processor 1020, where the memory 1010 is used to store a computer program and transmit the program code to the processor 1020.
- the processor 1020 can call and run the computer program from the memory 1010 to implement the method in the embodiment of the present disclosure.
- the processor 1020 may be configured to execute the above method embodiments according to instructions in the computer program.
- the processor 1020 may include, but is not limited to:
- a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, and so on.
- the memory 1010 includes, but is not limited to:
- Non-volatile memory can be Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM) or flash memory. Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
- By way of example but not limitation, many forms of RAM are available, such as: Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), Synch Link Dynamic Random Access Memory (SLDRAM) and Direct Rambus Random Access Memory (DR RAM).
- the computer program can be divided into one or more modules, and the one or more modules are stored in the memory 1010 and executed by the processor 1020 to complete the methods provided by the present disclosure.
- the one or more modules may be a series of computer program instruction segments capable of completing specific functions. The instruction segments are used to describe the execution process of the computer program on the electronic device 1000 .
- the electronic device may also include:
- Transceiver 1030 which may be connected to the processor 1020 or the memory 1010.
- the processor 1020 can control the transceiver 1030 to communicate with other devices. For example, it can send information or data to other devices, or receive information or data sent by other devices.
- Transceiver 1030 may include a transmitter and a receiver.
- the transceiver 1030 may further include an antenna, and the number of antennas may be one or more.
- the components of the electronic device 1000 are connected through a bus system, where in addition to a data bus, the bus system also includes a power bus, a control bus and a status signal bus.
- the present disclosure also provides a computer storage medium on which a computer program is stored.
- when the computer program is executed by a computer, the computer can perform the methods of the above method embodiments.
- An embodiment of the present disclosure also provides a computer program product containing instructions, which when executed by a computer causes the computer to perform the method of the above method embodiment.
- the computer program product includes one or more computer instructions.
- the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (such as infrared, radio, or microwave).
- the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
- the available media may be magnetic media (such as floppy disks, hard disks, magnetic tapes), optical media (such as digital video discs (DVD)), or semiconductor media (such as solid state disks (SSD)), etc.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
- Examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
- the client and server can communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communications network).
- examples of communications networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device: in response to the user's operation of entering the virtual reality space, presents virtual reality video information in a virtual screen in the virtual reality space, and displays multiple layers in multiple spatial location areas in the virtual reality space, where each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen.
- in this way, the structured display of video-related layers between the virtual screen and the user's current viewing position is realized; through the hierarchical setting of the distances between the layers and the user's current viewing position, the depth information in the virtual reality space is fully utilized and a three-dimensional display effect is achieved, which helps improve the user's viewing experience.
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages, such as Java, Smalltalk or C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or can be implemented using a combination of specialized hardware and computer instructions.
- the units involved in the embodiments of the present disclosure can be implemented in software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
- For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or any suitable combination of the foregoing.
- Examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A display processing method, apparatus, device and medium based on a virtual reality space, the method including: in response to a user's operation of entering the virtual reality space, presenting virtual reality video information in a virtual screen in the virtual reality space; and displaying multiple layers in multiple spatial location areas in the virtual reality space, where each of the multiple spatial location areas has a different display distance from the user's current viewing position and is located in front of the virtual screen. A model display method, apparatus, device and medium based on a virtual reality space, the method including: in response to a gift display instruction, generating a target gift model corresponding to the gift display instruction; determining, among multiple layers corresponding to the currently playing virtual reality video, a target display layer corresponding to the target gift model, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position; and displaying the target gift model on the target spatial location area corresponding to the target display layer. A human-computer interaction method, apparatus, device and storage medium, the method including: in response to a triggering operation of a gift entrance in a virtual space, presenting a close gift panel corresponding to the gift entrance in the virtual space; and in response to a giving operation of a hand model in the virtual space toward any virtual gift in the close gift panel, presenting a giving special effect of the virtual gift in the virtual space.
Description
Cross-reference to related applications
The present disclosure is based on, and claims priority to, the application with CN application number 202210993897.7 filed on August 18, 2022, the application with CN application number 202210995392.4 filed on August 18, 2022, and the application with CN application number 202211124457.4 filed on September 15, 2022; the disclosures of these CN applications are hereby incorporated into the present disclosure in their entirety.
The present disclosure relates to the technical fields of virtual reality, communication and Extended Reality (XR), and in particular to a display processing method, apparatus, device and medium based on a virtual reality space, a model display method, apparatus, device and medium based on a virtual reality space, and a human-computer interaction method, apparatus, device and storage medium.
Virtual Reality (VR) technology, also known as virtual environment or artificial environment, refers to technology that uses a computer to generate a virtual world that directly applies visual, auditory and tactile sensations to participants and allows them to observe and operate it interactively.
In the related art, live video and the like can be displayed based on virtual reality technology; in the process of displaying video, how to make full use of the depth information in the virtual reality space to improve the user's interactive experience is a mainstream demand.
In the related art, when live video is displayed in the real world, users can send gifts by clicking a gift control; therefore, realizing gift-sending interaction in the virtual reality space is of great significance for enhancing the realism of video viewing in the virtual reality world.
At present, XR technology has increasingly wide application scenarios, including Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR). In a virtual live streaming scenario, XR technology allows users to immersively watch various virtual live streaming images; for example, a user can experience a realistic live interaction scene by wearing a Head Mounted Display (HMD).
Usually, viewers can like, comment on, and give virtual gifts to their favorite streamers to enhance user interaction in the virtual live streaming scenario.
Summary
An embodiment of the present disclosure provides a display processing method based on a virtual reality space, the method including the following steps: in response to a user's operation of entering the virtual reality space, presenting virtual reality video information in a virtual screen in the virtual reality space; and displaying multiple layers in multiple spatial location areas in the virtual reality space, where each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen. An embodiment of the present disclosure further provides a display processing apparatus based on a virtual reality space, the apparatus including: a display processing module, configured to present virtual reality video information in a virtual screen in the virtual reality space in response to a user's operation of entering the virtual reality space, and to display multiple layers in multiple spatial location areas in the virtual reality space, where each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute the instructions to implement the display processing method based on a virtual reality space provided by the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium storing a computer program, where the computer program is used to execute the display processing method based on a virtual reality space, the model display method based on a virtual reality space, or the human-computer interaction method provided by the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer program, including instructions which, when executed by a processor, cause the processor to execute the display processing method based on a virtual reality space, the model display method based on a virtual reality space, or the human-computer interaction method provided by the embodiments of the present disclosure.
An embodiment of the present disclosure provides a model display method based on a virtual reality space, the method including: in response to a gift display instruction, generating a target gift model corresponding to the gift display instruction; determining, among multiple layers corresponding to the currently playing virtual reality video, a target display layer corresponding to the target gift model, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position; and displaying the target gift model on the target spatial location area corresponding to the target display layer.
An embodiment of the present disclosure further provides a model display apparatus based on a virtual reality space, the apparatus including: a generation module, configured to generate a target gift model corresponding to a gift display instruction in response to the gift display instruction; a determination module, configured to determine, among multiple layers corresponding to the currently playing virtual reality video, a target display layer corresponding to the target gift model, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position; and a display module, configured to display the target gift model on the target spatial location area corresponding to the target display layer.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute them to implement the model display method based on a virtual reality space provided by the embodiments of the present disclosure. An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium storing a computer program for executing the model display method based on a virtual reality space provided by the embodiments of the present disclosure.
In a first aspect, an embodiment of the present disclosure provides a human-computer interaction method applied to an XR device, the method including:
in response to a triggering operation of a gift entrance in a virtual space, presenting a corresponding close gift panel in the virtual space;
in response to a giving operation of a hand model in the virtual space toward any virtual gift in the close gift panel, presenting a giving special effect of the virtual gift in the virtual space.
In a second aspect, an embodiment of the present disclosure provides a human-computer interaction apparatus configured in an XR device, the apparatus including:
a gift panel presentation module, configured to present a corresponding close gift panel in the virtual space in response to a triggering operation of a gift entrance in the virtual space;
a gift giving module, configured to present a giving special effect of a virtual gift in the virtual space in response to a giving operation of a hand model in the virtual space toward any virtual gift in the close gift panel.
In a third aspect, an embodiment of the present disclosure provides an electronic device including a processor and a memory, where the memory is used to store a computer program and the processor is used to call and run the computer program stored in the memory to execute the human-computer interaction method provided in the first aspect of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure provides a non-volatile computer-readable storage medium for storing a computer program that causes a computer to execute the human-computer interaction method provided in the first aspect of the present disclosure.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program/instructions that cause a computer to execute the human-computer interaction method provided in the first aspect of the present disclosure.
The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of an application scenario of a virtual reality device provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of a display processing method based on a virtual reality space provided by an embodiment of the present disclosure;
Figs. 3 and 4 are schematic diagrams of display scenes based on a virtual reality space provided by embodiments of the present disclosure;
Fig. 5 is a schematic flowchart of another display processing method based on a virtual reality space provided by an embodiment of the present disclosure;
Figs. 6 to 9, Figs. 10A and 10B, and Fig. 11 are schematic diagrams of other display scenes based on a virtual reality space provided by embodiments of the present disclosure;
Fig. 12 is a structural schematic diagram of a display processing apparatus based on a virtual reality space provided by an embodiment of the present disclosure;
Fig. 13 is a structural schematic diagram of an electronic device provided by an embodiment of the present disclosure;
Fig. 14 is a schematic flowchart of a model display method based on a virtual reality space provided by an embodiment of the present disclosure;
Fig. 15 is a schematic diagram of a model display scene based on a virtual reality space provided by an embodiment of the present disclosure;
Fig. 16 is a schematic flowchart of another model display method based on a virtual reality space provided by an embodiment of the present disclosure;
Fig. 17 is a schematic diagram of a hierarchical structure between multiple layers provided by an embodiment of the present disclosure;
Fig. 18 is a structural schematic diagram of a model display apparatus based on a virtual reality space provided by an embodiment of the present disclosure;
Fig. 19 is a flowchart of a human-computer interaction method provided by an embodiment of the present disclosure;
Fig. 20 is a schematic diagram of presenting a close gift panel in a virtual space provided by an embodiment of the present disclosure;
Figs. 21(A), 21(B) and 21(C) are different exemplary schematic diagrams of turning the pages of a close gift panel in a virtual space provided by embodiments of the present disclosure;
Fig. 22 is a schematic diagram of presenting a close gift panel in a virtual space by triggering a remote gift entrance, provided by an embodiment of the present disclosure;
Fig. 23 is a schematic diagram of presenting a close gift panel in a virtual space by triggering a close gift entrance, provided by an embodiment of the present disclosure;
Fig. 24 is a flowchart of a method for presenting a close gift panel in a virtual space provided by an embodiment of the present disclosure;
Fig. 25 is a schematic diagram of presenting gift giving guidance information in experience mode provided by an embodiment of the present disclosure;
Fig. 26 is a flowchart of a method for giving any virtual gift in a virtual space through a hand model provided by an embodiment of the present disclosure;
Fig. 27 is a schematic diagram of a human-computer interaction apparatus provided by an embodiment of the present disclosure;
Fig. 28 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
The term "including" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not used to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
To solve, or at least partially solve, the above technical problems, the present disclosure provides a display processing method, apparatus, device and medium based on a virtual reality space, which realizes the structured display of video-related layers between the virtual screen and the user's current viewing position; through the hierarchical setting of the distances between the layers and the user's current viewing position, the depth information in the virtual reality space is fully utilized, a three-dimensional display effect is achieved, and the user's viewing experience is improved.
Some technical concepts or terms involved herein are explained as follows:
A virtual reality device is a terminal that realizes virtual reality effects, and can usually be provided in the form of glasses, a Head Mount Display (HMD), or contact lenses to realize visual perception and other forms of perception; of course, the forms of virtual reality devices are not limited to these, and they can be further miniaturized or enlarged as needed.
The virtual reality devices described in the embodiments of the present disclosure may include, but are not limited to, the following types:
A PC-based virtual reality (PCVR) device, which uses a PC to perform the calculations related to the virtual reality function and to output data; the external PCVR device uses the data output by the PC to realize the virtual reality effect.
A mobile virtual reality device, which supports setting up a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display provided with a dedicated card slot); through a wired or wireless connection with the mobile terminal, the mobile terminal performs the calculations related to the virtual reality function and outputs data to the mobile virtual reality device, for example watching virtual reality video information through the mobile terminal's APP.
An all-in-one virtual reality device, which has a processor for performing the calculations related to the virtual function and therefore has independent virtual reality input and output functions; it does not need to be connected to a PC or a mobile terminal and offers a high degree of freedom of use.
A virtual reality object is an object that interacts in a virtual scene, is controlled by a user or a robot program (for example, an artificial-intelligence-based robot program), and can stand still, move and perform various behaviors in the virtual scene, for example the virtual person corresponding to a user in a live streaming scenario.
As shown in Fig. 1, the HMD is relatively light, ergonomically comfortable, and provides high-resolution content with low latency. The virtual reality device is provided with a posture detection sensor (such as a nine-axis sensor) for detecting posture changes of the virtual reality device in real time; if the user wears the virtual reality device, when the user's head posture changes, the real-time posture of the head is transmitted to the processor to calculate the gaze point of the user's line of sight in the virtual environment; based on the gaze point, the image within the user's gaze range (i.e., the virtual field of view) in the three-dimensional model of the virtual environment is calculated and displayed on the display screen, providing an immersive experience as if the user were watching in a real environment.
In some embodiments, when a user wears an HMD device and opens a predetermined application, such as a live video streaming application, the HMD device runs a corresponding virtual scene, which may be a simulation environment of the real world, a semi-simulated and semi-fictional virtual scene, or a purely fictional virtual scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the embodiments of the present disclosure do not limit the dimension of the virtual scene. For example, the virtual scene may include characters, sky, land, ocean, and so on; the land may include environmental elements such as deserts and cities; the user can control a virtual object to move in the virtual scene, and can also interactively control the controls, models, displayed content, characters and so on in the virtual scene through an operating device such as a handle device, or through bare-hand gestures.
To make full use of the depth information in the virtual reality space and improve the user's interactive experience when watching videos, an embodiment of the present disclosure provides a display processing method based on a virtual reality space, which is introduced below with reference to the embodiments.
Fig. 2 is a schematic flowchart of a display processing method based on a virtual reality space provided by an embodiment of the present disclosure; the method can be executed by a display processing apparatus based on a virtual reality space, where the apparatus can be implemented in software and/or hardware and can generally be integrated in an electronic device. As shown in Fig. 2, the method includes the following steps:
Step S201: in response to a user's operation of entering the virtual reality space, present virtual reality video information in a virtual screen in the virtual reality space; and display multiple layers in multiple spatial location areas in the virtual reality space;
where each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages: in response to a user's operation of entering the virtual reality space, virtual reality video information is presented in a virtual screen in the virtual reality space, and multiple layers are displayed in multiple spatial location areas in the virtual reality space, where each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen. In this way, the structured display of video-related layers between the virtual screen and the user's current viewing position is realized; through the hierarchical setting of the distances between the layers and the user's current viewing position, the depth information in the virtual reality space is fully utilized and a three-dimensional display effect is achieved, which helps improve the user's viewing experience.
In the real world, when a live video stream is played, various kinds of information are displayed on the video interface; for example, bullet-screen comments and gift information may be displayed. In some embodiments, the depth information in the virtual reality space is fully utilized, and different information is split into different layers for display according to the needs of the scene, so as to achieve a hierarchical display effect.
In some embodiments, the user's operation of entering the virtual reality space is obtained, where the operation may be, for example, detecting the user's switch operation of turning on the virtual display device; then, in response to this operation, virtual reality video information is presented in a virtual screen in the virtual reality space, where the virtual screen corresponds to a canvas pre-built in the virtual scene for displaying the relevant video; for example, a live video or an online concert video is presented in the virtual screen.
In some embodiments, multiple layers are displayed in multiple spatial location areas in the virtual reality space, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position and is located in front of the virtual screen; that is, multiple layers are displayed in a staggered manner between the user's current viewing position and the virtual screen to display different layer information, visually presenting a sense of depth. Using this hierarchical layer structure, layer information that demands a high degree of the user's attention in the scene can be displayed in spatial location areas with a shorter display distance, while other layer information that demands less attention can be displayed in spatial location areas with a longer display distance; the depth information of the virtual reality space is thus fully utilized to achieve "emphasized display" and "de-emphasized display" of layer messages in the display dimension, improving the user's viewing experience.
It should be noted that the spatial location areas corresponding to the above multiple layers have different coordinate values on the axis perpendicular to the virtual screen, realizing a visually staggered "near and far" display. In addition, to avoid display occlusion between layers, the horizontal coordinate values of the spatial location areas may not be completely the same (i.e., front and rear layers contain layer portions that do not overlap in the X-axis direction), that is, the layers are staggered in the horizontal direction; or the vertical coordinate values may not be completely the same (i.e., front and rear layers contain layer portions that do not overlap in the Y-axis direction), that is, the layers are staggered in the vertical direction; or the layers may be staggered in both the horizontal and vertical directions.
For example, as shown in Fig. 3, when the application scenario is the playing of a live video stream, the spatial location display areas of the corresponding layers are arranged in sequence in front of the currently playing virtual video frame, and different layers have different display distances from the user's current viewing position. When the user watches in the virtual reality space, layers with a shorter display distance are obviously closer to the user's eyes and easier to notice; for example, using this hierarchical relationship to place operation-interface-type layers closest to the user's eyes obviously makes the user's operations more convenient.
Continuing to refer to Fig. 3, to avoid front-to-back occlusion between layers, the horizontal coordinate values of the spatial location areas corresponding to the multiple layers are not completely the same, and the vertical coordinate values are not completely the same either; thus, as shown in Fig. 4, the viewing user can see the layer messages on multiple layers in front of the virtual screen.
It should be emphasized that, for the viewing user, this hierarchical structure between layers only produces different attention sensitivities to different layers; continuing to refer to Fig. 4, visually the viewing user cannot intuitively see the hierarchical relationship between the target display positions of different layers, and can only perceive the nearness or farness of the content of different layers. Since users tend to pay attention to information closer to themselves, attention to the information on the layer closest to the user is enhanced.
The following describes, with reference to embodiments, how multiple layers can be displayed in the virtual reality space in some possible examples. As shown in Fig. 5, displaying multiple layers in multiple spatial location areas in the virtual reality space includes:
Step 501: obtain the layer types of the multiple layers corresponding to the virtual reality video information.
The virtual reality video information is a video stream displayed in a virtual scene in the virtual reality space, including but not limited to a live video stream, a concert video stream, and so on.
In some embodiments, the layer types of the multiple layers corresponding to the virtual reality video information are obtained, where the layer types can be calibrated according to the scene and include, but are not limited to, several of an operation user interface layer, an information flow display layer, a gift display layer, an expression display layer, and so on; each layer type may include at least one sub-layer, for example the operation user interface layer may include a control panel layer, which will not be listed one by one here.
Step 502: determine multiple spatial location areas of the multiple layers in the virtual reality space according to the layer types.
In some embodiments, the multiple spatial location areas of the multiple layers in the virtual reality space are determined according to the layer types, and each spatial location area has a different display distance from the user's current viewing position; thus, visually, different layer types are at different distances from the user, and the user usually first notices the content on the layers closer to him or her. Therefore, according to the needs of the scene, the above hierarchical structure between layers can be used to display different information, improving the user's viewing experience.
In some embodiments, the display vector of a layer can be adjusted according to the change information of the user's gaze direction, so that the relevant information on the layer always faces the user; the gaze change information includes the change direction and change angle of the user's viewing angle.
For example, as shown in Fig. 6, when it is detected that the user's gaze direction changes from S1 to S2, the multiple layers A, B and C are controlled to rotate from the direction facing S1 to the direction facing S2, thereby realizing a gaze-following effect of the layers' display spatial location areas.
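As a rough illustration only (the disclosure does not specify an implementation), the following Python sketch rotates layer anchor points about the viewer so that the layers follow a gaze change from an old direction to a new one. The function name reorient_layers and the use of Rodrigues' rotation are assumptions introduced here for clarity.

```python
import numpy as np

def reorient_layers(layer_positions, eye_pos, old_gaze, new_gaze):
    """Rotate layer anchors about the viewer so panels keep facing the gaze.

    layer_positions: (N, 3) array of layer anchor points.
    old_gaze / new_gaze: unit 3-vectors for previous and current view direction.
    """
    axis = np.cross(old_gaze, new_gaze)
    sin_a = np.linalg.norm(axis)
    cos_a = float(np.dot(old_gaze, new_gaze))
    if sin_a < 1e-6:                       # gaze (nearly) unchanged: nothing to do
        return layer_positions
    k = axis / sin_a
    rel = layer_positions - eye_pos
    # Rodrigues' rotation of each anchor about the viewer position.
    rot = (rel * cos_a
           + np.cross(k, rel) * sin_a
           + k * (rel @ k)[:, None] * (1.0 - cos_a))
    return eye_pos + rot
```

In a real renderer, each layer's orientation would also be updated so that its plane stays perpendicular to the new gaze direction.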
Step 503: display the layer messages in the corresponding layers according to the spatial location areas.
In some embodiments, after the spatial location areas are determined, the layer messages in the corresponding layers are displayed according to the spatial location areas. In the actual display process, as shown in Fig. 7, it suffices to display a layer message in the spatial location area of its corresponding layer; the corresponding layer itself does not need to be rendered (the figure only shows the display of the layer messages of two layers). This reduces occlusion between layers and improves the viewing experience. As mentioned above, to further avoid front-to-back occlusion between layers, the display positions of the layer messages of different layers can be staggered, or the spatial location areas of different layers can be staggered as much as possible to reduce the overlapping area between front and rear layers.
It should be noted that when displaying the layer messages according to the spatial location areas, it is only necessary to ensure that the corresponding layer messages are displayed within the corresponding spatial location areas; the display manner and display content are not limited and can be calibrated according to the needs of the scene.
In some embodiments, in response to obtaining a first layer message of a preset first type, where the first-type message can be regarded as a message that can be displayed in real time, such as a "gift message" sent by a user, a first layer corresponding to the first layer message is determined after the first layer message is obtained.
For example, a first display level of the first layer message can be determined according to evaluation indicators set for the scene, a second display level of each layer that can display messages of the preset first type can be obtained, and the layer whose second display level matches the first display level is determined to be the first layer; of course, other methods can also be used to determine the first layer, which are not listed here.
For example, the first layer message is displayed in the spatial location area corresponding to the first layer, so that the depth information in the virtual reality space is fully utilized to determine the display positions of different layer messages.
The display position of the first layer message within the spatial location area corresponding to the first layer may be random, or may be set according to the needs of the relevant scene.
In some embodiments, in response to obtaining a second layer message of a preset second type, where the second-type message may be a layer message that does not need to be displayed in real time: for example, in a live streaming scenario, since multiple users may send gift messages at the same time, gift messages are usually displayed in the form of a message queue in order to enhance the atmosphere of the live streaming room and let users intuitively see the second layer messages they sent; such gift messages can be regarded as second layer messages of the second type.
In some embodiments, the second layer message is added to the layer message queue of the second layer corresponding to the second layer message, where the layer messages in the queue are displayed in the spatial location area of the second layer in their order in the message queue.
In some embodiments, the layer messages in the message queue can be further divided into layer message subtypes, and a display sub-area corresponding to each layer message subtype is determined in the spatial location area of the second layer; thus, when displaying the layer messages in the layer message queue of the second layer, each layer message is displayed in the corresponding display sub-area of the second layer's spatial location area according to its layer message subtype. That is, layer messages under the same layer type can share one spatial location area, preventing an overly complex hierarchical structure from affecting the user's viewing experience.
For example, when the second layer message is a gift message and the second layer is layer A, as shown in Fig. 8, gift messages can be divided into a half-screen gift message subtype and a full-screen gift message subtype, where the display sub-area corresponding to the half-screen subtype is the left half of X and the display sub-area corresponding to the full-screen subtype is the whole of X. Therefore, when the gift message to be displayed in the current message queue is a half-screen gift message a, the display position of a is determined in the left half of X; when it is a full-screen gift message b, the display position of b is determined on X.
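A minimal sketch of the queue-plus-sub-region idea follows; it is not the disclosed implementation, and the GiftMessageQueue class, the subtype names and the normalized sub-region layout are hypothetical.

```python
from collections import deque

# Hypothetical sub-region layout for one gift layer, in normalized layer
# coordinates: half-screen gifts use the left half, full-screen gifts use all.
SUBREGIONS = {
    "half_screen": {"x": 0.0, "width": 0.5},
    "full_screen": {"x": 0.0, "width": 1.0},
}

class GiftMessageQueue:
    """FIFO queue of deferred (second-type) layer messages for one layer."""

    def __init__(self):
        self._queue = deque()

    def push(self, message, subtype):
        self._queue.append((message, subtype))

    def pop_for_display(self):
        """Return the next message together with its display sub-region."""
        if not self._queue:
            return None
        message, subtype = self._queue.popleft()
        return message, SUBREGIONS[subtype]

q = GiftMessageQueue()
q.push("gift a", "half_screen")
q.push("gift b", "full_screen")
print(q.pop_for_display())   # ('gift a', {'x': 0.0, 'width': 0.5})
```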
In summary, in the display processing method based on a virtual reality space of the embodiments of the present disclosure, in response to a user's operation of entering the virtual reality space, virtual reality video information is presented in a virtual screen in the virtual reality space, and multiple layers are displayed in multiple spatial location areas in the virtual reality space, where each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen. In this way, the structured display of video-related layers between the virtual screen and the user's current viewing position is realized, the depth information in the virtual reality space is fully utilized, and a three-dimensional display effect is achieved, which helps improve the user's viewing experience.
Based on the above embodiments, to make full use of the depth information in the virtual display space, the layer priority of each layer can also be determined according to the needs of the scene, and the spatial location areas of different layers can be determined based on the level of priority; according to indicators such as the importance of messages in the scene, layer messages are divided into different layers for display according to the degree of user attention they require, further improving the user's viewing experience.
In some embodiments, determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the layer types includes: determining the priority information of the multiple layers according to the multiple layer types, and then determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the priority information.
The higher the priority information, the more the layer messages displayed in the corresponding layer need to attract the user's attention, and therefore the closer the corresponding target spatial area is to the user's viewing position. The priority information corresponding to a layer type can be calibrated according to the needs of the scene, and the same layer type may correspond to different priority information in different scenes.
For example, when the application scenario is a live streaming scenario in the virtual reality space, as shown in Fig. 9, suppose the layer types corresponding to the multiple layers include: an operation user interface layer (including a control panel layer); an information flow layer (including a public screen layer used to display comment information and the like); a gift display layer (including a tray layer used to display tray gifts; host-and-guest gift layers, shown in the figure as a half-screen gift layer and a full-screen gift layer, used to display full-screen or half-screen gifts sent by any user watching the current live stream; a guest gift layer used to display flying gifts sent by other viewing users; and a host launch layer used to display the relevant gifts launched by the current viewing user); and an expression display layer (including a host launch layer used to display the relevant expressions launched by the current viewing user, and a guest expression layer used to display expressions sent by other viewing users). Then the priority information of the multiple layer types in the figure is: control panel layer = public screen layer > host launch layer > tray layer > full-screen gift layer = half-screen gift layer > guest gift layer > guest expression layer. The multiple spatial location areas corresponding to the multiple layers have different display distances from the viewing user, and the spatial location area of a layer type with higher priority is closer to the viewing user. Since the control panel layer is closest to the current viewing user, it is convenient for the user to perform interactive operations; and since the host launch layer is also relatively close, the current viewing user can notice the information he or she sent first.
It should be noted that, in actual execution, the manner of determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the priority information differs in different application scenarios, as illustrated below:
In some embodiments, considering that the virtual video in the virtual scene is displayed in a relatively fixed position, the spatial location areas of the corresponding layers are also set relatively fixedly; the spatial location areas corresponding to different priority information are stored in a preset database, which is queried according to the priority information to obtain the multiple spatial location areas corresponding to the multiple layers.
In some embodiments, considering that the corresponding layers are set in front of the video display position, the video display position of the virtual reality video information in the virtual reality space is determined; this position can be determined from the position of the pre-built canvas that displays the video (i.e., the position of the virtual screen). Then, starting from the video display position, the spatial location area of each layer is determined one by one, moving toward the user's current viewing position according to a preset distance interval and in order of priority information from low to high, so as to determine the multiple spatial location areas of the multiple layers. The preset distance interval can be calibrated according to experimental data.
In some embodiments, as shown in Fig. 10A, if it is determined that the video display position of the virtual reality video information in the virtual reality space is P0, the preset distance interval is J1, and the multiple layers in order of priority information from high to low are L1, L2, L3 and L4, then, moving toward the user's current viewing position: the spatial location area of L4 is determined to be at P1, a distance J1 in front of P0; the spatial location area of L3 is determined to be at P2, a distance J1 in front of P1; the spatial location area of L2 is determined to be at P3, a distance J1 in front of P2; and the spatial location area of L1 is determined to be at P4, a distance J1 in front of P3.
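The P0/J1 example above reduces to a simple loop. The sketch below implements exactly that placement rule; the function name layer_positions and the numeric values in the usage example are illustrative assumptions (the disclosure fixes neither).

```python
def layer_positions(video_z, interval, layers_low_to_high):
    """Place layers in front of the video plane, lowest priority first.

    video_z: depth of the virtual screen along the viewing axis (user at z=0).
    interval: preset spacing J1 between consecutive layers.
    layers_low_to_high: layer names ordered from lowest to highest priority.
    Higher-priority layers end up nearer the user.
    """
    positions, z = {}, video_z
    for layer in layers_low_to_high:
        z -= interval                      # step one interval toward the user
        positions[layer] = z
    return positions

# Matches the worked example: P0 = 10.0, J1 = 1.5, priorities L1 > L2 > L3 > L4.
print(layer_positions(10.0, 1.5, ["L4", "L3", "L2", "L1"]))
# {'L4': 8.5, 'L3': 7.0, 'L2': 5.5, 'L1': 4.0}
```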
In some embodiments, considering that the corresponding layers are set along the gaze direction of the user viewing the video display position, the target layer corresponding to the highest priority is determined according to the priority information, and the spatial location area of the target layer in the virtual reality space is determined, where the manner of determining this area differs in different application scenarios. In some possible implementations, the spatial location area can be determined to be at a preset distance threshold from the user's current viewing position along the user's gaze direction; in other possible implementations, the total distance between the user's current viewing position and the video display position of the virtual reality video information in the virtual reality space can be determined, and, taking the video display position as the starting point, the spatial location area is determined to be at a preset proportion threshold of the total distance, thereby preventing the displayed layer from being too close to the viewing user, which would limit the information visible within the user's viewing angle.
For example, after the spatial location area of the target layer is determined, starting from that spatial location area, the spatial location areas of each of the other layers are determined one by one, moving away from the user's current viewing position according to a preset distance interval and in order of priority information from high to low, so as to determine the multiple spatial location areas of the multiple layers. This preset distance interval may be the same as or different from the preset distance interval in the above embodiments and can be set according to the needs of the scene.
For example, in some embodiments, as shown in Fig. 10B, suppose the preset distance interval is J2, the multiple layers in order of priority information from high to low are L1, L2, L3 and L4, the target layer is L1, and the user's current viewing position is determined to be D. Then, moving away from the user's current viewing position: Z1, at a distance J3 from D, is determined as the spatial location area of L1; the spatial location area of L2 is determined to be at Z2, a distance J2 beyond Z1; the spatial location area of L3 is determined to be at Z3, a distance J2 beyond Z2; and the spatial location area of L4 is determined to be at Z4, a distance J2 beyond Z3.
It should be noted that the position range corresponding to the spatial location areas mentioned in the above embodiments can be determined according to the shape of the spatial location area; the spatial location area corresponding to one layer may be continuous, or may be divided into multiple blocks as shown in Fig. 9 above. In some embodiments, considering the spatial characteristics of the virtual reality space, as shown in Fig. 11, each layer is an arc-shaped region (the non-shaded region in the figure); the arc-shaped region lies within the viewing user's field of view, the spatial location area corresponding to each layer lies on the corresponding arc-shaped region (two layers are shown in the figure), and the center of the arc-shaped region corresponding to each layer lies on the user's gaze direction corresponding to the user's current viewing position, so that the viewing user perceives a stronger stereoscopic effect when watching.
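A small sketch of how such an arc-shaped region could be sampled in the horizontal plane is given below, assuming the viewer as the arc center and the layer's display distance as the radius; the function name arc_region and the sampling scheme are assumptions, not part of the disclosure.

```python
import math

def arc_region(viewer, gaze_yaw, radius, fov_deg, samples=5):
    """Sample points on the circular arc hosting one layer.

    viewer: (x, z) viewing position; gaze_yaw: view direction in radians;
    radius: the layer's display distance; fov_deg: horizontal field of view.
    """
    half = math.radians(fov_deg) / 2.0
    pts = []
    for i in range(samples):
        a = gaze_yaw - half + i * (2.0 * half / (samples - 1))
        pts.append((viewer[0] + radius * math.sin(a),
                    viewer[1] + radius * math.cos(a)))
    return pts

# Two layers at different display distances share the same arc center (the viewer).
print(arc_region((0.0, 0.0), 0.0, 2.0, 90.0))
```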
In summary, the display processing method based on a virtual reality space of the embodiments of the present disclosure determines the spatial location area of each layer based on the layer's priority information; the spatial location areas of different layers have different display distances from the user's current viewing position. Through the structured layer setting, the hierarchical display of different layer information is satisfied, the depth information in the virtual reality space is fully utilized, and a three-dimensional display effect is achieved.
To implement the above embodiments, the present disclosure further provides a display processing apparatus based on a virtual reality space.
Fig. 12 is a structural schematic diagram of a display processing apparatus based on a virtual reality space provided by an embodiment of the present disclosure; the apparatus can be implemented in software and/or hardware and can generally be integrated in an electronic device to perform display processing based on a virtual reality space. As shown in Fig. 12, the apparatus includes a display processing module 1210, where:
the display processing module 1210 is configured to present virtual reality video information in a virtual screen in the virtual reality space in response to a user's operation of entering the virtual reality space, and to display multiple layers in multiple spatial location areas in the virtual reality space;
where each spatial location area has a different display distance from the user's current viewing position and is located in front of the virtual screen.
The display processing apparatus based on a virtual reality space provided by the embodiments of the present disclosure can execute the display processing method based on a virtual reality space provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the method; its implementation principle is similar and is not repeated here.
To implement the above embodiments, the present disclosure further provides a computer program product, including a computer program/instructions which, when executed by a processor, implement the display processing method based on a virtual reality space in the above embodiments.
Fig. 13 is a structural schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Referring now to Fig. 13, it shows a structural schematic diagram of an electronic device 1300 suitable for implementing the embodiments of the present disclosure. The electronic device 1300 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 13 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 13, the electronic device 1300 may include a processor (such as a central processing unit or a graphics processor) 1301, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a memory 1308 into a random access memory (RAM) 1303. The RAM 1303 also stores various programs and data required for the operation of the electronic device 1300. The processor 1301, the ROM 1302 and the RAM 1303 are connected to each other through a bus 1304; an input/output (I/O) interface 1305 is also connected to the bus 1304.
Generally, the following devices can be connected to the I/O interface 1305: input devices 1306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 1307 including, for example, a liquid crystal display (LCD), speaker and vibrator; a memory 1308 including, for example, a magnetic tape and a hard disk; and a communication device 1309. The communication device 1309 can allow the electronic device 1300 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 13 shows the electronic device 1300 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 1309, or installed from the memory 1308, or installed from the ROM 1302. When the computer program is executed by the processor 1301, the above functions defined in the display processing method based on a virtual reality space of the embodiments of the present disclosure are executed.
To solve, or at least partially solve, the above technical problems, the present disclosure provides a model display method, apparatus, device and medium based on a virtual reality space, which makes full use of the depth information of the virtual reality space to achieve a stereoscopic display effect of gift models, and improves the user's interactive experience on the basis of realizing gift-sending interaction in the virtual reality space.
Fig. 14 is a schematic flowchart of a model display method based on a virtual reality space provided by an embodiment of the present disclosure; the method can be executed by a model display apparatus based on a virtual reality space, where the apparatus can be implemented in software and/or hardware and can generally be integrated in an electronic device. As shown in Fig. 14, the method includes the following steps:
Step 201: in response to a gift display instruction, generate a target gift model corresponding to the gift display instruction.
The target gift model is a model that can be displayed in the virtual reality space; it includes, but is not limited to, one or a combination of text, pictures, animations and the like, and may be in 2D or 3D form, which are not listed one by one here.
It should be noted that, in the embodiments of the present disclosure, the manner of obtaining the gift display instruction differs in different application scenarios, for example:
In some possible examples, the user can trigger the input of the gift display instruction through a preset button on an operating device (such as a handle device); there are also various other ways for the user to input the gift display instruction. Compared with triggering a function call with a physical device button, this approach proposes an improved solution for VR operation without relying on physical device buttons, which can alleviate the technical problem that physical device buttons are easily damaged and thus easily affect the user's operation.
For example, the image information of the user captured by a camera can be monitored, and it can be judged, according to the user's hand or handheld device (such as a handle) in the image information, whether the preset conditions for displaying interactive component models (component models used for interaction, each pre-bound with an interactive function event) are met; if so, at least one interactive component model is displayed in the virtual reality space, and finally, by recognizing the motion information of the user's hand or handheld device, the interactive function event pre-bound to the interactive component model selected by the user is executed.
For example, a camera can be used to capture an image of the user's hand or handheld device, and image recognition technology can be used to judge the hand gesture or the position change of the handheld device in the image; if it is judged that the user's hand or handheld device is raised to a certain extent, so that the user's virtual hand or virtual handheld device mapped in the virtual reality space enters the user's current viewing angle range, the display of interactive component models can be evoked in the virtual reality space. Based on image recognition technology, raising the handheld device can call up interactive component models in the form of floating balls, where each floating ball represents an operating function and the user can interact based on the floating ball functions. For example, floating balls 1, 2, 3, 4 and 5 can correspond to interactive component models such as "display gift 1", "display gift 2", "display gift 3", "more gifts" and "cancel".
After the floating-ball interactive component models are called up, according to the subsequently monitored images of the user's hand or handheld device, the position of the user's hand or handheld device is recognized and mapped into the virtual reality space to determine the spatial position of the corresponding click mark; if the spatial position of the click mark matches the spatial position of a target interactive component model among the displayed interactive component models, the target interactive component model is determined to be the one selected by the user, and the interactive function event pre-bound to it is executed.
The user can raise the left-hand handle to call up the floating-ball interactive component models, and then select and click one of them by moving the position of the right-hand handle. On the VR device side, according to the image of the user's handle, the position of the right-hand handle is recognized and mapped into the virtual reality space to determine the spatial position of the corresponding click mark (for example, a ray trajectory model can be mapped in the virtual reality space, and the end position of the ray trajectory indicates the spatial position of the click mark); if the spatial position of the click mark matches the spatial position of the "display gift 1" interactive component model, the user has selected and clicked the "display gift 1" function; finally, the interactive function event pre-bound to the "display gift 1" interactive component model is executed, that is, the gift display instruction for the gift model corresponding to gift 1 is triggered.
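One plausible way to realize the ray-to-component matching described above is a closest-approach test between the cursor ray and each component anchor; the sketch below is an assumption-laden illustration (pick_component, the hit radius and the sample positions are all hypothetical).

```python
import numpy as np

def pick_component(ray_origin, ray_dir, components, hit_radius=0.1):
    """Return the component whose anchor the cursor ray passes closest to.

    components: mapping name -> 3-D anchor position of a floating-ball model.
    A component counts as selected when the ray passes within hit_radius of it.
    """
    d = ray_dir / np.linalg.norm(ray_dir)
    best, best_dist = None, hit_radius
    for name, pos in components.items():
        rel = np.asarray(pos) - ray_origin
        t = float(rel @ d)
        if t < 0:                                    # behind the ray origin
            continue
        dist = float(np.linalg.norm(rel - t * d))    # perpendicular distance
        if dist < best_dist:
            best, best_dist = name, dist
    return best

balls = {"show_gift_1": (0.3, 1.2, 1.0), "cancel": (0.6, 1.2, 1.0)}
print(pick_component(np.zeros(3), np.array([0.28, 1.15, 1.0]), balls))
# show_gift_1
```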
In some embodiments, a corresponding gift display control model can also be displayed in the virtual reality space, and the gift display instruction is obtained when the gift display control model is triggered; the gift display control model can be triggered by moving the position of the right-hand handle to select and click it, with reference to the above embodiments. In some embodiments, to facilitate the user's interactive operations, the gift display control model can be displayed in the spatial location area of the layer closest to the user's current viewing position.
In some embodiments, after the above-mentioned gift display instruction is obtained, a target gift model corresponding to the gift display instruction is generated.
Specifically, gift model rendering information corresponding to the gift display instruction can be determined, where the rendering information is used to render the gift image or animation in the gift model; then operation object information corresponding to the gift display instruction is obtained, including but not limited to the user nickname and user avatar obtained with the user's authorization; and the target gift model is generated according to the gift model rendering information and the operation object information. In this way, the operation object information can be learned intuitively from the target gift model, improving the user's interactive experience.
In some embodiments, when the gift display instruction is obtained upon the triggering of the gift display control model, a gift panel containing multiple candidate gift models can be displayed, and the candidate gift model selected by the user in the gift panel is obtained as the target gift model, where the target gift model can be selected by moving the position of the right-hand handle to select and click.
Step 202: among the multiple layers corresponding to the currently playing virtual reality video, determine a target display layer corresponding to the target gift model, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position.
In some embodiments, the currently playing virtual reality video corresponds to multiple layers, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position; therefore, hierarchical display of different layer information can be realized. Visually, different layer types are at different distances from the user, and the user usually first notices the content on the layers closer to him or her; therefore, according to the needs of the scene, the hierarchical structure between the above layers can be used to determine the target display layer corresponding to the target gift model.
It should be noted that, in different application scenarios, the manner of determining the target display layer corresponding to the target gift model among the multiple layers corresponding to the currently playing virtual reality video differs, for example:
In some embodiments, first priority level information of the target gift model is identified, second priority level information of each of the multiple layers is determined, and the layer whose second priority level information matches the first priority level information is determined to be the target display layer.
That is, in some embodiments, the first priority level information of the target gift model can be determined according to the preset value information of the target gift model on the corresponding platform, and each layer is assigned second priority level information according to its display distance from the current user; the layer whose second priority level information matches the first priority level information is determined to be the target display layer. In this way, the depth information in the virtual reality space is fully utilized, and the display distance from the current user's viewing position is determined according to the priority of the target gift model in the scene.
In some embodiments, the gift type of the target gift model is identified, and the layer matching the gift type among the multiple layers is determined to be the target display layer.
In some embodiments, the displayable gift types corresponding to each layer that can display gift models can be preset; after the gift type of the target gift model is identified, the target display layer is determined according to the displayable gift types corresponding to the layers. In this way, the depth information in the virtual reality space is fully utilized, and the display distance from the current user's viewing position is determined according to the gift type of the target gift model. For example, when the multiple layers include a full-screen gift layer, if the gift type of the target gift model is a full-screen gift, the corresponding target gift model is displayed in the full-screen gift layer.
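Both matching strategies (by gift type, or by priority level) amount to a table lookup; the sketch below shows one possible arrangement, with the table contents and the by_type_first flag being hypothetical rather than disclosed.

```python
# Hypothetical lookup tables; a real mapping would come from scene configuration.
LAYER_BY_TYPE = {"tray": "tray_layer", "full_screen": "full_screen_layer"}
LAYER_BY_PRIORITY = {1: "near_layer", 2: "mid_layer", 3: "far_layer"}

def target_display_layer(gift, by_type_first=True):
    """Pick the display layer for a gift model, by gift type or by priority."""
    if by_type_first and gift.get("type") in LAYER_BY_TYPE:
        return LAYER_BY_TYPE[gift["type"]]
    return LAYER_BY_PRIORITY[gift["priority"]]

print(target_display_layer({"type": "tray", "priority": 2}))       # tray_layer
print(target_display_layer({"type": "firework", "priority": 1}))   # near_layer
```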
Step 203: display the target gift model on the target spatial location area corresponding to the target display layer.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages: in the model display solution based on a virtual reality space, in response to a gift display instruction, a target gift model corresponding to the gift display instruction is generated; among the multiple layers corresponding to the currently playing virtual reality video, a target display layer corresponding to the target gift model is determined, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position; and the target gift model is then displayed on the target spatial location area corresponding to the target display layer. In this way, the depth information of the virtual reality space is fully utilized to achieve a stereoscopic display effect of the gift model, and the user's interactive experience is improved on the basis of realizing gift-sending interaction in the virtual reality space.
In some embodiments, after the target display layer is determined, the target gift model is displayed on the target spatial location area corresponding to the target display layer, so as to realize gift-sending interaction in the virtual display space; on the basis of simulating gift-sending interaction in the real world, the depth information in the virtual reality space is also fully utilized, improving the interactive experience.
It should be noted that the manner of displaying the target gift model on the target spatial location area includes, but is not limited to, at least one of those mentioned in the following embodiments:
In some embodiments, a display path of the target gift model is determined, and the target gift model is controlled to be displayed on the target spatial location area according to the display path, where the display path lies within the target spatial location area and can be calibrated according to the needs of the scene.
For example, as shown in Fig. 15, when the target gift model is as shown in the figure, it slides in from the right side to the left side in the corresponding target spatial display area, stops moving when the length of the slide-in path equals a preset length threshold, and is displayed fixedly at the corresponding position.
In some embodiments, a display duration of the target gift model is determined, where the display duration may be preset or determined according to rules set for the scene; for example, when the rule is to determine the display duration according to the number of target gift models currently displayed, the more target gifts there are, the longer the corresponding display duration, in order to enhance the atmosphere. For example, the target gift model is controlled to be displayed on the target spatial location area according to the display duration, and the display of the target gift model is stopped after the display duration reaches the preset display duration.
In summary, the model display method based on a virtual reality space of the embodiments of the present disclosure generates, in response to a gift display instruction, a target gift model corresponding to the gift display instruction; determines, among the multiple layers corresponding to the currently playing virtual reality video, a target display layer corresponding to the target gift model, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position; and then displays the target gift model on the target spatial location area corresponding to the target display layer. In this way, the depth information of the virtual reality space is fully utilized to achieve a stereoscopic display effect of the gift model, and the user's interactive experience is improved on the basis of realizing gift-sending interaction in the virtual reality space.
Based on the above embodiments, it can be understood that in the real world, when a live video stream is played, various kinds of information are displayed on the video interface, such as bullet-screen comments and gift information. In some embodiments, the depth information in the virtual reality space is fully utilized, and different information is split into different layers for display according to the needs of the scene, so as to achieve a hierarchical display effect.
Therefore, in some embodiments, as shown in Fig. 16, before determining the target display layer corresponding to the target gift model among the multiple layers corresponding to the currently playing virtual reality video, the method further includes:
Step 401: obtain the layer types of the multiple layers corresponding to the currently playing virtual reality video.
Step 402: determine multiple target spatial location areas of the multiple layers in the virtual reality space according to the layer types, where each target spatial location area has a different display distance from the user's current viewing position.
In some embodiments, the layer types of the multiple layers corresponding to the currently playing virtual reality video are obtained, where the layer types can be calibrated according to the scene and include, but are not limited to, several of an operation user interface layer, an information flow display layer, a gift display layer, an expression display layer, and so on; each layer type may include at least one sub-layer, for example the operation user interface layer may include a control panel layer, which will not be listed one by one here.
Then, the multiple spatial location areas of the multiple layers in the virtual reality space are determined according to the layer types, and each spatial location area has a different display distance from the user's current viewing position; thus, visually, different layer types are at different distances from the user, and the user usually first notices the content on the layers closer to him or her. Therefore, according to the needs of the scene, the above hierarchical structure between layers can be used to display different information, improving the user's viewing experience.
It should be noted that the spatial location areas corresponding to the above multiple layers have different coordinate values on the axis perpendicular to the virtual screen, realizing a visually staggered "near and far" display. In addition, to avoid display occlusion between layers, the horizontal coordinate values of the spatial location areas may not be completely the same (i.e., front and rear layers contain layer portions that do not overlap in the X-axis direction), that is, the layers are staggered in the horizontal direction; or the vertical coordinate values may not be completely the same (i.e., front and rear layers contain layer portions that do not overlap in the Y-axis direction), that is, the layers are staggered in the vertical direction; or the layers may be staggered in both directions.
For example, as shown in Fig. 17, when the application scenario is the playing of a live video stream, the spatial location display areas of the corresponding layers are arranged in sequence in front of the currently playing virtual video frame, and different layers have different display distances from the user's current viewing position. Continuing to refer to Fig. 17, to avoid front-to-back occlusion between layers, the horizontal and vertical coordinate values of the spatial location areas corresponding to the multiple layers are not completely the same. When the user watches in the virtual reality space, layers with a shorter display distance are obviously closer to the user's eyes and easier to notice; for example, using this hierarchical relationship to place operation-interface-type layers closest to the user's eyes obviously makes the user's operations more convenient. Therefore, in some embodiments, the distance between the user's eyes and the target spatial display area corresponding to the target display layer displaying the target gift model is adapted to the degree to which the target gift model needs to attract the user's attention.
It should be emphasized that, for the viewing user, this hierarchical structure between layers only produces different attention sensitivities to different layers; visually the viewing user cannot intuitively see the hierarchical relationship between the spatial location areas of different layers, and can only perceive the nearness or farness of the content of different layers. Since users tend to pay attention to information closer to themselves, attention to the information on the layer closest to the user is enhanced.
To make full use of the depth information in the virtual display space, the layer priority of each layer can also be determined according to the needs of the scene, the spatial location areas of different layers can be determined based on the level of priority, and the target gift model can be assigned to an adapted target display layer for display according to indicators such as its importance, further improving the user's viewing experience.
In some embodiments, determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the layer types includes: determining multiple pieces of second priority information of the multiple layers according to the multiple layer types, and then determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the multiple pieces of second priority information.
The higher the second priority information, the more the layer messages displayed in the corresponding layer need to attract the user's attention, and therefore the closer the corresponding target spatial area is to the user's viewing position. The priority information corresponding to a layer type can be calibrated according to the needs of the scene, and the same layer type may correspond to different priority information in different scenes.
It should be noted that, in actual execution, the manner of determining the multiple spatial location areas of the multiple layers in the virtual reality space according to the second priority information differs in different application scenarios, as illustrated below:
In some embodiments, considering that the corresponding layers are set in front of the video display position, the video display position of the virtual reality video in the virtual reality space is determined; this position can be determined from the position of the pre-built canvas that displays the video. Then, starting from the video display position, the spatial location area of each layer is determined one by one, moving toward the user's current viewing position according to a preset distance interval and in order of second priority information from low to high. The preset distance interval can be calibrated according to experimental data.
In some embodiments, considering that the corresponding layers are set along the gaze direction of the user viewing the video display position, the target layer corresponding to the highest priority is determined according to the second priority information, and the spatial location area of the target layer in the virtual reality space is determined, where the manner of determining this area differs in different application scenarios;
In some possible implementations, the spatial location area can be determined to be at a preset distance threshold from the user's current viewing position along the user's gaze direction; in other possible implementations, the total distance between the user's current viewing position and the video display position can be determined, and, taking the video display position as the starting point, the spatial location area is determined to be at a preset proportion threshold of the total distance, thereby preventing the displayed layer from being too close to the viewing user, which would limit the information visible within the user's viewing angle.
For example, after the spatial location area of the target layer is determined, starting from that spatial location area, the spatial location areas of each of the other layers are determined one by one, moving away from the user's current viewing position according to a preset distance interval and in order of second priority information from high to low. This preset distance interval may be the same as or different from the preset distance interval in the above embodiments and can be set according to the needs of the scene.
In some embodiments, considering the spatial characteristics of the virtual reality space, Fig. 11 shows another schematic diagram of a hierarchical structure between multiple layers provided by an embodiment of the present disclosure. Each layer is an arc-shaped region (the non-shaded region in the figure); the arc-shaped region lies within the viewing user's field of view, the spatial location area corresponding to each layer lies on the corresponding arc-shaped region (two layers are shown in the figure), and the center of the arc-shaped region corresponding to each layer lies on the user's gaze direction corresponding to the user's current viewing position.
Thus, in some embodiments, the display distance of each layer relative to the user's current viewing position can be determined according to the multiple layer types, for example according to the second priority information corresponding to the layer type, where the higher the second priority, the smaller the display distance between the corresponding layer and the user's viewing position (the manner of determining the display distance can refer to the manner of determining the position of the spatial location area in the above embodiments). Then, taking the user's current viewing position as the center of the circle and the corresponding display distance as the radius, the arc-shaped region of each layer is determined by extending in the user's gaze direction, where the range of the arc-shaped region is related to the user's field of view; the arc-shaped region is determined to be the spatial location area of the corresponding layer.
In actual execution, the target display layer is the layer best adapted to the target gift model in the corresponding scene. For example, Fig. 9 shows another schematic diagram of a hierarchical structure between multiple layers provided by an embodiment of the present disclosure. Suppose the layer types corresponding to the multiple layers include: an operation user interface layer (including a control panel layer); an information flow layer (including a public screen layer used to display comment information and the like); a gift display layer (including a tray layer used to display tray gifts; host-and-guest gift layers, shown in the figure as a half-screen gift layer and a full-screen gift layer, used to display full-screen or half-screen gifts sent by any user watching the current live stream; a guest gift layer used to display flying gifts sent by other viewing users; and a host launch layer used to display the relevant gifts launched by the current viewing user); and an expression display layer (including a host launch layer used to display the relevant expressions launched by the current viewing user, and a guest expression layer used to display expressions sent by other viewing users). Then the priority information of the multiple layer types in the figure is: control panel layer = public screen layer > host launch layer > tray layer > full-screen gift layer = half-screen gift layer > guest gift layer > guest expression layer; the multiple spatial location areas corresponding to the multiple layers have different display distances from the viewing user, and the spatial location area of a layer type with higher priority is closer to the viewing user.
When the target gift model is a tray gift, the target display layer is the layer corresponding to the tray layer type; when the target gift model is a full-screen gift, the target display layer is the layer corresponding to the full-screen gift layer type, and so on.
In summary, the model display method based on a virtual reality space of the embodiments of the present disclosure sets multiple layers corresponding to the virtual reality video, where the spatial location areas of different layers have different display distances from the user's current viewing position; through the structured layer setting, the hierarchical display of different layer information is satisfied, the depth information in the virtual reality space is fully utilized, and a stereoscopic display effect of the target gift model is achieved.
To implement the above embodiments, the present disclosure further provides a model display apparatus based on a virtual reality space.
Fig. 18 is a structural schematic diagram of a model display apparatus based on a virtual reality space provided by an embodiment of the present disclosure; the apparatus can be implemented in software and/or hardware and can generally be integrated in an electronic device to perform model display based on a virtual reality space. As shown in Fig. 18, the apparatus includes a generation module 810, a determination module 820 and a display module 830, where:
the generation module 810 is configured to generate a target gift model corresponding to a gift display instruction in response to the gift display instruction;
the determination module 820 is configured to determine, among the multiple layers corresponding to the currently playing virtual reality video, a target display layer corresponding to the target gift model, where the spatial location area corresponding to each layer has a different display distance from the user's current viewing position;
the display module 830 is configured to display the target gift model on the target spatial location area corresponding to the target display layer.
The model display apparatus based on a virtual reality space provided by the embodiments of the present disclosure can execute the model display method based on a virtual reality space provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the method; its implementation principle is similar and is not repeated here.
To implement the above embodiments, the present disclosure further provides a computer program product, including a computer program/instructions which, when executed by a processor, implement the model display method based on a virtual reality space in the above embodiments.
Fig. 13 is a structural schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Referring now to Fig. 13, it shows a structural schematic diagram of an electronic device 1300 suitable for implementing the embodiments of the present disclosure. The electronic device 1300 may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 13 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 13, the electronic device 1300 may include a processor (such as a central processing unit or a graphics processor) 1301, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a memory 1308 into a random access memory (RAM) 1303. The RAM 1303 also stores various programs and data required for the operation of the electronic device 1300. The processor 1301, the ROM 1302 and the RAM 1303 are connected to each other through a bus 1304; an input/output (I/O) interface 1305 is also connected to the bus 1304.
Generally, the following devices can be connected to the I/O interface 1305: input devices 1306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 1307 including, for example, a liquid crystal display (LCD), speaker and vibrator; a memory 1308 including, for example, a magnetic tape and a hard disk; and a communication device 1309. The communication device 1309 can allow the electronic device 1300 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 13 shows the electronic device 1300 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
For example, according to the embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 1309, or installed from the memory 1308, or installed from the ROM 1302. When the computer program is executed by the processor 1301, the above functions defined in the model display method based on a virtual reality space of the embodiments of the present disclosure are executed.
In a virtual live streaming scenario, the style of presenting virtual gifts is relatively monotonous: a viewer usually emits a cursor ray, selects a virtual gift in a gift panel, and presses the handle Trigger key to send the virtual gift to the streamer, which lacks interactive fun.
The present disclosure provides a human-computer interaction method, apparatus, device and storage medium, which realizes diversified interaction with virtual gifts by users in a virtual space, enhances the fun of gift interaction in the virtual space, and mobilizes enthusiasm for live streaming in the virtual space.
To avoid the problem of lacking interactive fun when giving gifts to the streamer through a handle cursor in the virtual space, the inventive concept of the present disclosure is: in response to a triggering operation of a gift entrance in the virtual space, a close gift panel located beside the user is presented in the virtual space, so as to support the user in performing the relevant giving operations on any virtual gift in the close gift panel through a hand model. Then, in response to a giving operation of the hand model toward any virtual gift, the giving special effect of that virtual gift can be presented in the virtual space, thereby increasing the user's interactive operations when giving virtual gifts, realizing diversified interaction with virtual gifts, and enhancing the interactive fun of virtual gifts and the user interaction atmosphere in the virtual space.
Fig. 19 is a flowchart of a human-computer interaction method provided by an embodiment of the present disclosure; the method can be applied to an XR device, but is not limited thereto. The method can be executed by the human-computer interaction apparatus provided by the present disclosure, where the apparatus can be implemented by any software and/or hardware. For example, the apparatus can be configured in an electronic device capable of simulating virtual scenes, such as an AR/VR/MR device, and the present disclosure does not limit the specific type of the electronic device.
For example, as shown in Fig. 19, the method may include the following steps:
S110: in response to a triggering operation of a gift entrance in the virtual space, present a corresponding close gift panel in the virtual space.
In some embodiments, the virtual space may be a corresponding virtual environment simulated by the XR device for a live streaming scene selected by any user, so that the corresponding live interaction information is displayed in the virtual space. For example, the streamer can select a certain type of live streaming scene to construct a corresponding virtual live environment as the virtual space in the present disclosure, so that each viewer enters the virtual space to realize the corresponding live interaction.
In the virtual space, multiple virtual screens such as a live screen, a control screen and a public screen can be set for different live streaming functions to display different live content respectively. As shown in Fig. 20, the live screen can display the live video stream from the streamer so that the user can watch the corresponding live image; the control screen can display information such as streamer information, online viewer information, a related live recommendation list and the definition options of the current live stream, so that the user can perform the relevant live operations; the public screen can display user comment information, likes, gift giving and the like within the current live stream, so that the user can manage the current live stream.
It should be understood that the live screen, the control screen and the public screen all face the user and are displayed at different positions in the virtual space; moreover, the position and style of any virtual screen can be adjusted to prevent it from blocking other virtual screens.
Usually, for the diversified interaction with virtual gifts between the user and the streamer in the virtual space, as shown in Fig. 20, a corresponding gift entrance is displayed in the virtual space. When a user wants to give a virtual gift to the streamer, they first trigger the gift entrance, for example by selecting it with the handle cursor or by controlling the hand model to click it.
Therefore, to increase the user's interactive operations when giving virtual gifts in the virtual space and avoid the monotonous interaction of giving gifts with the handle cursor and Trigger key, the present disclosure, after detecting the triggering operation of the gift entrance, presents a close gift panel located beside the user in the virtual space. Moreover, a corresponding hand model is also displayed in the virtual space, and the hand model is controlled to perform the corresponding interactive operations on any virtual object in the virtual space.
The close gift panel is beside the user, that is, within the reach of the user's hand model, so as to support the user, through real handle operations or gesture operations, in controlling the hand model in the virtual space to directly perform the relevant close-range giving operations on any virtual gift in the close gift panel, without using a cursor ray to perform the relevant long-range giving operations. For example, a series of interactive operations such as touching, gripping, throwing, launching and putting back on the panel can be performed on any virtual gift through the hand model.
In some embodiments, for the close gift panel, the corresponding close gift panel can be presented at a predetermined position in the virtual reality space, where the distance between the predetermined position and the user meets a predetermined distance requirement, so that the close gift panel is beside the user, within the reach of the user's hand model, supporting real handle operations or gesture operations.
For example, the close gift panel in the present disclosure can be presented horizontally around the user, at a short distance in front of the user, for example 40-45 centimeters away, with its horizontal position approximately level with the user's elbow.
It should be understood that when there are corresponding User Interface (UI) controls in the close gift panel, relevant interactive operations can also be performed through the hand model toward any UI control; for example, a series of interactive operations such as touching, clicking, pressing and lifting can be performed on any UI control through the hand model. The UI controls can be used to perform relevant panel operations on the close gift panel, such as panel page turning and panel retracting.
In addition, considering that the overall size of the close gift panel is limited while the number of virtual gifts displayed in it may be large, in order to display more and more comprehensive virtual gifts in the close gift panel, the present disclosure can set a corresponding page-turning function for the close gift panel. The page-turning operations supported by the close gift panel can include, but are not limited to, the following three cases:
Case 1: a dragging page-turning operation of the hand model toward the close gift panel.
As shown in Fig. 21(A), considering that the hand model is beside the user, for example presented around the user, the present disclosure can select the close gift panel by touching it with the hand model. When the hand model touches the close gift panel, the close gift panel can be controlled to be highlighted and the real handle in the XR device can be controlled to vibrate, to prompt the user that the close gift panel has been selected. Then, by operating the Grip key on the real handle and performing the corresponding dragging motion, the user can control the hand model to drag the close gift panel to turn its pages, generating the dragging page-turning operation of the close gift panel.
Case 2: a triggering operation of the hand model toward a page-turning control in the close gift panel.
As shown in Fig. 21(B), the present disclosure can set corresponding page-turning controls in the close gift panel, which can include a previous-page control and a next-page control. Clicking a page-turning control in the close gift panel through the hand model generates the triggering operation of that page-turning control as the page-turning operation of the close gift panel.
Case 3: a toggle operation of the hand model toward a rocker assembly in the handle model.
To ensure the user's diversified interaction with any virtual object in the virtual space, the present disclosure displays not only the corresponding hand model but also a corresponding handle model in the virtual space, and different interactive operations are performed in the virtual space through the different characteristics of the hand model and the handle model. Then, as shown in Fig. 21(C), with the handle model held by the hand model, toggling the rocker assembly in the handle model left or right generates the page-turning operation of the close gift panel.
It can be seen from the above that when more virtual gifts are to be presented in the close gift panel, any one of the above page-turning operations can be performed on the close gift panel through the hand model. Then, in response to the page-turning operation, the close gift panel can be controlled to turn pages, that is, the page-turning special effect of the close gift panel is displayed in the virtual space; and, following the page turning, new virtual gifts can be presented in the close gift panel while some of the virtual gifts already presented are retracted.
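The three page-turning inputs above can be funneled into one panel action. The following Python sketch is only one way this dispatch might look; the event field names, the joystick deadzone of 0.5 and the Panel class are assumptions for illustration.

```python
class Panel:
    """Toy stand-in for the close gift panel's paging state."""
    def __init__(self):
        self.page = 0
    def turn(self, step):
        self.page = max(0, self.page + step)

def handle_panel_input(event, panel):
    """Dispatch the three disclosed page-turning inputs onto one panel action."""
    if event["kind"] == "drag" and event.get("grip_held"):
        # Case 1: hand model grips the panel and drags it sideways.
        panel.turn(+1 if event["dx"] < 0 else -1)
    elif event["kind"] == "click" and event["target"] in ("prev_page", "next_page"):
        # Case 2: hand model taps a paging control on the panel.
        panel.turn(+1 if event["target"] == "next_page" else -1)
    elif event["kind"] == "joystick" and abs(event["x_axis"]) > 0.5:
        # Case 3: rocker on the handle model is flicked left or right.
        panel.turn(+1 if event["x_axis"] > 0 else -1)

p = Panel()
handle_panel_input({"kind": "joystick", "x_axis": 0.8}, p)
print(p.page)   # 1
```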
S120: in response to a giving operation of the hand model in the virtual space toward any virtual gift in the close gift panel, present the giving special effect of the virtual gift in the virtual space.
Through the technical solution of the present disclosure, in response to the triggering operation of the gift entrance in the virtual space, the corresponding close gift panel can be presented in the virtual space; the close gift panel is beside the user and supports the user in performing the relevant giving operations on any virtual gift in it through the hand model. Therefore, in response to a giving operation of the hand model toward any virtual gift, the giving special effect of the virtual gift can be presented in the virtual space, without needing to select a virtual gift with a cursor ray and press the handle Trigger key to present the corresponding gift giving special effect; this increases the user's interactive operations when giving virtual gifts, realizes diversified interaction with virtual gifts, enhances the interactive fun and user interaction atmosphere in the virtual space, and mobilizes users' enthusiasm for live streaming in the virtual space.
After the close gift panel is presented in the virtual space, the hand model can be controlled, through real handle operations or gesture operations, to perform the relevant close-range motions toward any virtual gift in the close gift panel, so as to select a virtual gift and jointly perform various motions related to gift giving, thereby simulating the actual gift-giving process and enhancing the diversified interaction of gift giving in the virtual space.
Therefore, the present disclosure can detect, in real time, the various motion information performed by the hand model toward any virtual gift, to judge whether the user performs the corresponding giving operation on that virtual gift through the hand model. After the giving operation of the hand model toward any virtual gift is detected, it indicates that the user currently needs to give that virtual gift to the streamer. Therefore, the present disclosure can present the giving special effect of the virtual gift in the virtual space; the giving special effect may be a throwing effect of throwing the virtual gift into the virtual space through the hand model, or a prop-launch effect of sending the virtual gift into the virtual space through a gift-giving prop, which is not limited by the present disclosure.
Moreover, when presenting the giving special effect of a virtual gift in the virtual space, considering that the virtual gift is thrown and given in a three-dimensional space and has a corresponding spatial throwing trajectory, the giving special effect of the virtual gift can include, but is not limited to: the spatial throwing trajectory of the virtual gift in the virtual space, and a throwing special effect set based on the spatial throwing trajectory and/or the virtual gift. The throwing special effect may be an animation effect displayed at the final landing point after the virtual gift completes its throw in the virtual space.
In some embodiments, consider that when the user faces the live screen in the virtual space to watch the streamer's live video stream, there are usually a corresponding visible area and blind areas; there may also be some virtual objects in the virtual space, such as the control screen, the public screen and the gift panel, which may block the user's view of the live video stream. Moreover, the gift giving special effect in the virtual space usually needs to be presented between the user and the streamer's live video stream (that is, the live screen), so that the user can see the giving special effect and judge whether the virtual gift was given successfully. If, after a virtual gift is given in the virtual space, the user cannot see its giving special effect, the interaction between the user and the streamer regarding that virtual gift cannot be guaranteed.
Therefore, to ensure that the user can see the interactive effect presented after giving any virtual gift in the virtual space, the present disclosure can pre-delineate a gift giving safety area in the virtual space according to the user's position in the virtual space and the position of the live screen, so that for a virtual gift that lands within the gift giving safety area after being thrown, the user is guaranteed to see the corresponding giving special effect.
Thus, when presenting the giving special effect of a virtual gift in the virtual space, the gift giving safety area set in the virtual space is first determined; then, the giving special effect of the virtual gift is presented in the virtual space according to the gift giving safety area.
That is to say, whether this gift giving can be executed successfully is judged by whether the throwing position of the virtual gift in the virtual space is within the gift giving safety area; when it is determined that the virtual gift can be thrown into the gift giving safety area in the virtual space, this gift giving can be executed successfully, and the giving special effect of the virtual gift can be presented in the virtual space.
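As a minimal sketch of the landing-point check, assuming the safety area is pre-computed as an axis-aligned box from the viewer position and the live screen position (the disclosure does not fix the area's shape, and lands_in_safe_area is a hypothetical name):

```python
def lands_in_safe_area(landing_point, safe_area):
    """Check whether a thrown gift's landing point falls in the safety area.

    safe_area: axis-aligned box {'min': (x, y, z), 'max': (x, y, z)} assumed to
    be derived from the viewer position and the live-stream screen position.
    """
    return all(lo <= p <= hi for p, lo, hi in
               zip(landing_point, safe_area["min"], safe_area["max"]))

safe = {"min": (-1.0, 0.0, 1.0), "max": (1.0, 2.5, 3.0)}
if lands_in_safe_area((0.2, 1.1, 2.0), safe):
    print("play the giving special effect")   # landing point visible to the user
```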
Moreover, for different live streaming functional characteristics, there are different functional areas in the virtual space, such as the close gift panel, the gift giving safety area and the live content display area. The live content display area is used to display various live-related content, such as the live video stream, live comments, live lists and streamer information; that is, it may be the area where the above-mentioned virtual screens such as the live screen, public screen and control screen are located.
Then, to ensure that information in different functional areas does not block one another for the user, the present disclosure can set the live content display area, the gift giving safety area and the close gift panel in the virtual space to lie in different spatial planes; corresponding distance intervals and area sizes can be set for each of them. Then, according to the preset distance and area size, the live content display area, the gift giving safety area and the close gift panel are distributed sequentially in the virtual space, so that they occupy relatively independent positions that do not affect one another.
In addition, to ensure the interactive fun of gift giving in the virtual space, when interacting with any virtual object in the virtual space through the hand model, the present disclosure can control the XR device to perform different degrees of vibration according to the different interactive operations performed by the hand model toward that virtual object. For example, when the hand model hovers over a virtual object, the XR device (for example, the real handle) is controlled to perform a light vibration; when the virtual object is clicked through the hand model, the XR device can be controlled to perform a stronger vibration. The virtual objects interacting with the hand model may be the gift entrance, the close gift panel, each virtual gift in the close gift panel, or relevant user interaction controls.
Taking any virtual gift in the close gift panel as an example, when the hand model hovers over the virtual gift, the XR device can be controlled to perform a light vibration; when the virtual gift is held through the hand model, the XR device can be controlled to perform a stronger vibration.
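A rough illustration of the interaction-to-vibration mapping follows; the amplitude values, duration and the Controller stub are assumptions (real devices expose their own haptics APIs, which the disclosure does not name).

```python
class Controller:
    """Stand-in for a real handle's haptics interface."""
    def vibrate(self, amplitude, duration_ms):
        print(f"vibrate amp={amplitude} for {duration_ms} ms")

# Hypothetical amplitude map: light buzz on hover, stronger buzz on grip.
HAPTIC_LEVELS = {"hover": 0.2, "grip": 0.8, "click": 0.6}

def vibrate_for(interaction, controller):
    """Drive controller vibration whose strength depends on the interaction."""
    controller.vibrate(HAPTIC_LEVELS.get(interaction, 0.0), 50)

vibrate_for("hover", Controller())   # vibrate amp=0.2 for 50 ms
```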
In the technical solution provided by the embodiments of the present disclosure, in response to the triggering operation of the gift entrance in the virtual space, the corresponding close gift panel can be presented in the virtual space; the close gift panel is beside the user and supports the user in performing the relevant giving operations on any virtual gift in it through the hand model. Therefore, in response to a giving operation of the hand model toward any virtual gift, the giving special effect of the virtual gift can be presented in the virtual space, without needing to select a virtual gift with a cursor ray and press the handle Trigger key; this increases the user's interactive operations when giving virtual gifts, realizes diversified interaction with virtual gifts, enhances the interactive fun and user interaction atmosphere in the virtual space, and mobilizes users' enthusiasm for live streaming.
In some embodiments, to ensure diversified triggering of the presentation of the close gift panel, the present disclosure can set a remote gift entrance in the virtual space, which can be located in any virtual screen, for example at a certain position in the public screen. In this case, the remote gift entrance is relatively far from the user and can support long-range triggering interaction using a cursor ray.
Alternatively, a close gift entrance can be set in the virtual space, at a position close to the user; for example, it can be located a short distance in front of and to the right of the user's body, for example about 70 centimeters away, and approximately level with the user's elbow, so that it is within the reach of the hand model. In this case, the close gift entrance is relatively close to the user and can support close-range triggering interaction through the hand model.
The process of presenting the close gift panel in the virtual space will be described below for the two cases of the remote gift entrance and the close gift entrance respectively.
The process of presenting the close gift panel by triggering the remote gift entrance is explained first:
For the remote gift entrance set on any virtual screen in the virtual space, as shown in Fig. 22, a corresponding cursor ray can be emitted toward the remote gift entrance through the handle model or the hand model; the cursor ray is used to select the remote gift entrance, and the selection is then confirmed by touching the Trigger key on the real handle, generating the cursor selection operation of the remote gift entrance. Then, in response to the cursor selection operation, the corresponding close gift panel is presented in the virtual space. That is, after the cursor selection operation acting on the remote gift entrance is detected, it indicates that there is currently a gift-giving interaction between the user and the streamer in the virtual space, so a close gift panel beside the user can be presented, so that the corresponding giving operation can be performed on any virtual gift in it through the hand model.
Then, when the giving process of any virtual gift in the close gift panel is completed and gift interaction is no longer needed, the presented close gift panel needs to be retracted in the virtual space. To ensure convenient retraction of the close gift panel, which was presented by triggering the remote gift entrance, the present disclosure can set a retract control in the close gift panel. As shown in Fig. 22, clicking the retract control through the hand model indicates that the presented close gift panel needs to be retracted from the virtual space.
Therefore, during the presentation of the close gift panel, the present disclosure detects, in real time, the triggering operation performed through the hand model toward the retract control in the close gift panel; in response to the triggering operation of the hand model toward the retract control, the presented close gift panel can be retracted in the virtual space.
Moreover, to ensure diversified interaction when the close gift panel is presented and retracted, when the close gift panel is presented in the virtual space by triggering the remote gift entrance, or when it is retracted, the remote presentation special effect or the remote retracting special effect of the close gift panel can also be played in the virtual space. The remote presentation special effect may be a corresponding remote presentation animation and/or sound effect, and the remote retracting special effect may be a remote retracting animation and/or sound effect opposite to the remote presentation special effect.
The process of presenting the close gift panel by triggering the close gift entrance is explained next:
For the close gift entrance set at a position close to the user in the virtual space, as shown in Fig. 23, upon first entering the virtual space, the close gift entrance defaults to a first ghost state, which may be a ghost style indicating that the close gift panel is not presented, to avoid blocking the user's view of the streamer's live content.
When there is a need for gift interaction in the virtual space, since the close gift entrance is at a position close to the user and within the reach of the hand model, the present disclosure can control the hand model to hover above the close gift entrance to activate it. Then, as shown in Fig. 23, in response to the hovering operation of the hand model toward the close gift entrance, the close gift entrance can be controlled to change from the first ghost state to an activated state. For example, in the activated state the close gift entrance can glow and enlarge, so that its icon gradually changes from a ghost image to a solid one; moreover, the present disclosure can also control the real handle of the XR device to vibrate accordingly.
Then, the user confirms the selection of the close gift entrance through the hand model by touching the Trigger key on the real handle, generating the hovering confirmation operation of the hand model toward the close gift entrance. In response to this hovering confirmation operation, the corresponding close gift panel can be presented in the virtual space. That is, after the hovering confirmation operation is detected, it indicates that there is currently a gift-giving interaction between the user and the streamer in the virtual space, so a close gift panel beside the user can be presented, so that the corresponding giving operation can be performed on any virtual gift in it through the hand model.
Moreover, after the close gift panel is presented, to prevent the close gift entrance in the activated state from affecting the user's giving of any virtual gift in the close gift panel, as shown in Fig. 23, the present disclosure can further control the close gift entrance to change from the activated state to a second ghost state. The second ghost state may be a ghost style of the close gift entrance icon indicating that the close gift panel has been presented. For example, if the icon of the close gift entrance is represented by a virtual gift box, the first ghost state may be a closed virtual gift box and the second ghost state may be an opened virtual gift box.
When the giving process of any virtual gift in the close gift panel is completed and gift interaction is no longer needed, the presented close gift panel needs to be retracted in the virtual space. To ensure convenient retraction, since the close gift panel was presented by triggering the close gift entrance located at the user's close position, the present disclosure can trigger the close gift entrance again through the hand model to indicate that the presented close gift panel should be retracted.
As shown in Fig. 23, the hand model hovers again above the close gift entrance in the second ghost state to reactivate it; for example, the close gift entrance glows and enlarges again so that its icon continues to change from a ghost image to a solid one, and the present disclosure can still control the real handle of the XR device to vibrate accordingly.
Then, the user confirms the reselection of the close gift entrance through the hand model by touching the Trigger key on the real handle, generating the repeated hovering confirmation operation of the hand model toward the close gift entrance; in response to this repeated hovering confirmation operation, the presented close gift panel can be retracted in the virtual space.
Moreover, after the close gift panel is retracted, the close gift entrance can be controlled to change back to the first ghost state, so that the close gift panel can be presented again later.
Furthermore, to ensure diversified interaction when the close gift panel is presented and retracted, when the close gift panel is presented in the virtual space by triggering the close gift entrance, or when it is retracted, the close presentation special effect or the close retracting special effect of the close gift panel can also be played in the virtual space. The close presentation special effect may be a corresponding close presentation animation and/or sound effect, and the close retracting special effect may be a close retracting animation and/or sound effect opposite to the close presentation special effect.
For example, suppose the icon of the close gift entrance is a virtual gift box; the close presentation special effect may be that, after the virtual gift box is triggered through the hand model, it changes from ghost to solid and is opened, so that a beam of light flies out of it; the light can gradually gather beside the user into the shape of the close gift panel, and the virtual gifts are then gradually displayed. After the close presentation special effect finishes playing, the close gift panel can be stably presented beside the user with the corresponding virtual gifts displayed on it; the virtual gift box can remain in the opened style and gradually change from solid to ghost.
When the close gift panel is retracted, the opened virtual gift box is triggered again through the hand model, so that it glows and enlarges again and gradually changes from ghost to solid; then the close gift panel is controlled to turn into a beam of light that flies from beside the user back into the virtual gift box, with the virtual gifts gradually disappearing along with the close gift panel. After the light flies back into the virtual gift box, the virtual gift box is controlled to close and gradually change from solid to ghost, changing back to the default first ghost state.
According to one or more embodiments of the present disclosure, to ensure convenient use when the user gives any virtual gift in the close gift panel, the present disclosure sets an experience mode for the close gift panel, so that when the close gift panel is presented in the virtual space, the user is guided to perform a complete gift-giving process on the virtual gifts in the close gift panel under the experience mode, thereby improving the user's understanding of the close gift panel.
As shown in Fig. 24, the process of presenting the close gift panel in the virtual space can include the following steps:
S610: if the current presentation of the close gift panel in the virtual space is within a preset number of presentations, present the panel template of the close gift panel in experience mode in the virtual space.
Considering that the user may not understand the gift-giving function of the close gift panel during its first few presentations (for example the first two), the user needs to be guided to perform a complete gift-giving process in experience mode to enhance their understanding of the close gift panel; when the close gift panel is subsequently presented again (for example from its third presentation onward), there is no longer a need to guide the user through the complete gift-giving process in experience mode.
Therefore, for the guidance needs of the close gift panel in experience mode, the present disclosure can set a limit on the number of presentations of the close gift panel, that is, the preset number in the present disclosure; for example, the preset number may be 2, which is not limited by the present disclosure. Each time the close gift panel is presented in the virtual space, the corresponding number of presentations is recorded to judge whether the current presentation is within the preset number, so as to judge whether the user needs to be guided through a gift-giving experience.
If the current presentation of the close gift panel is within the preset number of presentations, this presentation needs to guide the user through a complete gift-giving process so that the user accurately understands the gift-giving function supported by the close gift panel. Then, the panel template of the close gift panel in experience mode can first be presented in the virtual space, so that the user enters experience mode.
As shown in Fig. 25, the panel template has the same style as the close gift panel and is also presented beside the user. However, considering that the main purpose of the panel template is to guide the user through a complete gift-giving process without limiting the gift given, the panel template may contain one or a small number of virtual gifts as the corresponding experience gifts, rather than presenting all virtual gifts as the close gift panel does.
It should be understood that the gift-giving process performed by the user in experience mode is not a real gift giving; therefore, in experience mode, the fee display and the deduction entrance of the experience gift are cancelled, so that no deduction operation is performed when the user performs the gift-giving process in experience mode.
S620: according to the gift giving guidance information in experience mode, control the hand model to perform the corresponding giving operation on the experience gift in the panel template.
Since a complete gift-giving process includes multiple gift-giving steps, corresponding gift giving guidance information is presented for each giving step the user performs in experience mode. Then, following the gift giving guidance information for each step, the hand model can be controlled to perform the corresponding giving operation on the experience gift in the panel template; after this step is completed, the guidance information for the next step is presented and the hand model continues to be controlled accordingly. This cycle continues, supporting the user in completing one full gift-giving process in experience mode.
As an exemplary solution in the present disclosure, as shown in Fig. 25, when the user first enters experience mode, a piece of gift giving guidance information is first presented: "This is experience mode; giving gifts in this mode is free of charge", to remind the user that gift giving in experience mode incurs no charge. This guidance information disappears automatically after being presented for 3 seconds. Then, if the user still has not performed the current gift-giving operation within a corresponding period after the previous guidance information disappears, a further piece of guidance information instructing the user to perform the current operation is presented in the virtual space; the guidance information may be textual guidance or virtual animation guidance, which is not limited by the present disclosure.
If, 5 seconds after the first guidance information disappears, the user still has not held the experience gift in the panel template through the hand model, a virtual guidance animation of holding the experience gift through the hand model is presented, or a soft line appears beside the experience gift connected to a guidance text, which may read "Touch and hold the gift with the Grab key". After the experience gift is held through the hand model, if the experience gift has not been thrown into the virtual space within 7 seconds, a virtual guidance animation of throwing the experience gift through the hand model is presented, or a soft line appears beside the experience gift connected to a guidance text, which may read "Throw while releasing the Grab key to send the gift"; following the above virtual guidance animation or guidance text, the held experience gift is thrown into the virtual space through the hand model, indicating the completion of one gift-giving process.
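The timed prompts above (intro toast for 3 s, grab hint 5 s after the toast disappears, throw hint 7 s into a hold) suggest a small state machine; the sketch below is one hypothetical realization, with next_guidance and its timer arguments invented here for illustration.

```python
def next_guidance(since_intro_s, since_grab_s, holding, thrown):
    """Pick the guidance prompt due now, following the timings in the text:
    intro toast for 3 s, grab hint 5 s after it disappears, throw hint 7 s
    into a hold that has not yet ended in a throw."""
    if since_intro_s < 3:
        return "Experience mode: gifts given here are free of charge."
    if not holding and since_intro_s >= 3 + 5:
        return "Touch and hold the gift with the Grab key."
    if holding and not thrown and since_grab_s >= 7:
        return "Throw while releasing the Grab key to send the gift."
    return None

print(next_guidance(9.0, 0.0, holding=False, thrown=False))
# Touch and hold the gift with the Grab key.
```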
S630: after the experience gift is successfully given in experience mode, present the exit information of the experience mode in the virtual space.
After one successful giving of the experience gift in experience mode, as shown in Fig. 25, an experience-success pop-up is presented in the virtual space; a confirmation control for the successful gift-giving experience can be set in the pop-up, and triggering this confirmation control through the hand model exits the experience mode.
S640: in response to the exit operation of the experience mode, present the close gift panel in the virtual space.
When the triggering operation performed through the hand model on the confirmation control in the experience-success pop-up is detected, the exit operation of the experience mode can be generated. Then, in response to the exit operation, the presentation of the panel template is cancelled in the virtual space, the close gift panel is presented, and the corresponding deduction entrance is displayed, so that the corresponding gift-giving operations can be performed.
In some embodiments, for the giving operation of the hand model in the virtual space toward any virtual gift in the close gift panel, as shown in Fig. 26, the present disclosure can use the following steps to explain the process of giving any virtual gift through the hand model in the virtual space:
S810: in response to a holding operation of the hand model in the virtual space toward any virtual gift in the close gift panel, determine the movement posture change amount of the hand model when holding the virtual gift.
In some embodiments, to increase the user's interactive operations when giving virtual gifts in the virtual space and avoid the monotonous interaction of giving gifts with the handle cursor and Trigger key, the hand model simulated in the virtual space can be controlled, through handle operations or gesture operations, to perform corresponding motions so as to perform the corresponding holding operation on any virtual gift in the close gift panel.
Then, to enhance the user interaction atmosphere during gift giving in the virtual space, the hand model, after holding any virtual gift, is required to be able to perform various motions indicating gift giving in the virtual space, so as to simulate the actual gift-giving process and enhance the diversified interaction of gift giving.
Therefore, in response to the holding operation of the hand model toward any virtual gift, the user can be supported in inputting, on the XR device, motion information indicating the motion to be performed by the hand model; for example, operating the direction keys on the handle, moving the handle, or performing corresponding motion gestures with the hand can all indicate the motion the hand model needs to perform after holding a virtual gift. According to such motion information, the motion instruction initiated by the user toward the hand model can be generated.
Then, by parsing the motion instruction initiated by the user toward the hand model, the motion information that the hand model actually needs to perform after holding the virtual gift can be determined, so that the hand model is controlled to perform the corresponding motion in the virtual space while holding the virtual gift. Moreover, during the actual motion of the hand model, the present disclosure also needs to determine, in real time, the movement posture change amount of the hand model after holding the virtual gift, so as to judge whether it meets the giving trigger condition of the held virtual gift.
S820: when the movement posture change amount meets the giving trigger condition of the virtual gift, determine the giving operation of the hand model toward the virtual gift.
To ensure the accurate giving of virtual gifts in the virtual space, the present disclosure can preset a giving trigger condition for each virtual gift for its actual giving operation.
In the present disclosure, the virtual gifts in the virtual space can be divided into throwable gifts and non-throwable gifts, to enhance the diversity of gift giving. Throwable gifts may be virtual gifts that the hand model can successfully give to the streamer by directly throwing them in the virtual space, such as standalone expression gifts and heart-shaped gifts. Non-throwable gifts may be other virtual gifts that are stored in a corresponding gift-giving prop and are successfully given to the streamer by performing the corresponding interactive operations with the held gift-giving prop through the hand model, such as bubble gifts given through a bubble wand or hot-air-balloon gifts launched through a heating device.
In some embodiments, for throwable gifts, the giving trigger condition can be set as: the hand model is in a grip-cancelled posture after performing one throwing motion or continuous throwing motions. That is, from the movement posture change amount performed by the hand model after holding a throwable gift, it is judged whether the hand model has performed one throwing motion or continuous throwing motions, and whether, after performing these motions, the holding of the throwable gift is cancelled so that the hand model ends up in the grip-cancelled posture. If the giving trigger condition of the throwable gift is met, it indicates that the held virtual gift has been thrown into the virtual space through the hand model, which means the throwable gift needs to be given to the streamer in the virtual space.
Further, for non-throwable gifts, the giving trigger condition can be set as: the target part of the hand model that interacts with the gift-giving prop performs the gift-giving operation set for the gift-giving prop. When the hand model holds a non-throwable gift, this is represented as the hand model holding the gift-giving prop in which the non-throwable gift is stored; for example, for a bubble gift given through a bubble wand, the hand model holds the bubble wand from the close gift panel to emit the corresponding bubble gift. Then, from the movement posture change amount performed by the hand model after holding the non-throwable gift, it is judged whether the target part of the hand model that interacts with the gift-giving prop performs the gift-giving operation set for the prop. If the giving trigger condition of the non-throwable gift is met, it indicates that the non-throwable gift can be sent out into the virtual space through the corresponding interaction between the target part of the hand model and the gift-giving prop, which means the non-throwable gift needs to be given to the streamer in the virtual space.
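For the throwable case, one plausible way to evaluate "a throwing motion ending in a grip cancellation" is to gate on the hand model's release-time speed computed from its recent pose samples; the speed gate, the frame interval and throw_detected below are assumptions, as the disclosure does not specify how the posture change amount is evaluated.

```python
import numpy as np

def throw_detected(positions, released, speed_gate=1.5, dt=1/72):
    """Decide whether a held, throwable gift was just thrown.

    positions: recent hand-model positions sampled each frame, shape (N, 3);
    released: True once the grip was cancelled. The 1.5 m/s gate and the
    72 Hz frame rate are assumed values, not figures from the disclosure.
    """
    if not released or len(positions) < 2:
        return False
    v = (np.asarray(positions[-1]) - np.asarray(positions[-2])) / dt
    return float(np.linalg.norm(v)) > speed_gate   # fast motion ending in release

path = [(0, 1.00, 0.00), (0, 1.02, 0.08), (0, 1.05, 0.20)]
print(throw_detected(path, released=True))   # True: roughly 8.9 m/s at release
```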
In some embodiments, after the movement posture change amount performed by the hand model after holding the virtual gift is determined, the present disclosure first determines the giving trigger condition of the held virtual gift; then, whether the virtual gift needs to be given to the streamer is judged by whether the movement posture change amount meets that giving trigger condition. When it does, the user has instructed the giving of the virtual gift to the streamer, and the giving operation of the hand model toward the virtual gift is thereby determined.
Subsequently, in response to the giving operation of the hand model toward the virtual gift, the giving special effect of the virtual gift can be presented in the virtual space.
It should be noted that, when giving a virtual gift to the streamer through the hand model, a single giving operation or a continuous giving operation can be performed, so that virtual gifts have different giving types. Therefore, when presenting the giving special effect of a virtual gift, the present disclosure also judges, from the movement posture change amount of the hand model after holding the virtual gift, whether the motion performed is a single giving or a continuous giving, thereby determining the giving type of the virtual gift; the giving special effect under the corresponding giving type can then be presented in the virtual space.
On the other hand, after the hand model holds a virtual gift and drives it to perform the corresponding motion in the virtual space, the user may actively give up giving the virtual gift to the streamer. In this case, before the held virtual gift is given, the holding of the virtual gift is cancelled through the hand model to indicate that the user actively gives up giving it. Therefore, the present disclosure can set a homing condition by judging whether the hand model cancels the holding of the virtual gift without performing the corresponding gift-giving operation.
Then, when the movement posture change amount of the hand model holding the virtual gift is determined and meets the homing condition of the virtual gift, the virtual gift is controlled, in the virtual space, to fold back from the hand model to its original position on the close gift panel.
That is to say, the present disclosure can judge whether the user actively gives up giving the virtual gift by judging whether the movement posture change amount meets the homing condition. When the movement posture change amount indicates that the hand model performs the grip cancellation operation without having performed the operation of giving the virtual gift, the homing condition is met; therefore, in the virtual space, the virtual gift no longer held by the hand model can be controlled to fold back from the position of the hand model to its original position on the gift panel.
In some embodiments, the hand model's cancellation of the holding of the virtual gift may occur in the following two situations: 1) the holding is cancelled directly at any motion point after the hand model performs the corresponding motion, that is, the grip cancellation operation is performed in place at that motion point; 2) the hand model drives the held virtual gift to perform the corresponding motion and returns to above the gift's original position on the gift panel before cancelling the holding, that is, the grip cancellation operation is performed after the hand model drives the virtual gift back from any motion point to above its original position on the gift panel.
For the first situation, the present disclosure can set a first homing condition, which can be: the grip cancellation operation is performed at any motion point while the hand model is holding the virtual gift; that is, the hand model moves to any motion point after holding the virtual gift and performs the grip cancellation operation in place at that point. Therefore, when the movement posture change amount of the hand model holding the virtual gift meets the first homing condition, the present disclosure can control the virtual gift to perform a preset downward vertical movement from the hand model and then fold back to its original position on the gift panel.
That is to say, to simulate the effect of gravity on the virtual gift after it is released by the hand model, the virtual gift can be controlled to perform the preset vertical movement downward for a short period. The downward distance of the preset vertical movement can be determined from the height of the hand model's position when it cancels the grip on the virtual gift and the height of the gift panel; normally the former is greater than the latter. For example, if the height of the hand model when it cancels the grip is A and the height of the gift panel is B, the downward distance of the preset vertical movement can be 0.2*(A-B). Then, after the virtual gift completes the preset downward vertical movement, it is controlled to fold back to its original position on the gift panel.
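The 0.2*(A-B) rule reduces to a one-line computation; the sketch below implements it directly (the function name return_drop_distance and the numeric values in the example are illustrative, while the 0.2 factor comes from the text).

```python
def return_drop_distance(release_height, panel_height, factor=0.2):
    """Short gravity-like drop before a released gift folds back to the panel.

    Matches the example in the text: with release height A and panel height B,
    the preset downward travel is 0.2 * (A - B); the gift then returns home.
    """
    return max(0.0, factor * (release_height - panel_height))

print(return_drop_distance(1.4, 0.9))   # 0.1: drop 10 cm, then fold back
```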
For the second situation, the present disclosure can set a second homing condition, which can be: the grip cancellation operation is performed after the hand model, holding the virtual gift, has moved to above the gift's original position on the gift panel; that is, the hand model moves back above the original position after holding the virtual gift and performs the grip cancellation operation above that original position. Therefore, when the movement posture change amount of the hand model holding the virtual gift meets the second homing condition, the present disclosure can control the virtual gift to fold back from its current position to its original position on the gift panel.
That is to say, when the hand model drives the virtual gift back to above its original position on the gift panel, after the holding is cancelled, the virtual gift can be controlled to fold back directly from its current position above the original position to its original position on the gift panel.
In addition, when any virtual gift is held through the hand model, to ensure effective interaction between the hand model and the virtual gift, the present disclosure sets a preset holding upper limit for the duration for which the hand model holds the virtual gift. If, by the time the holding duration reaches the preset holding upper limit, the movement posture change amount of the hand model holding the virtual gift has met neither the giving trigger condition nor the homing condition of the virtual gift, the giving guidance information of the virtual gift can be presented once in the virtual space; the giving guidance information is used to indicate the giving operation that needs to be performed when giving the virtual gift in the virtual space through the hand model, so as to guide the user to control the hand model to accurately perform the corresponding giving operation on the held virtual gift.
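A minimal sketch of the hold-timeout check follows, assuming a concrete upper limit of 10 seconds (the disclosure sets a limit but does not fix its value; maybe_show_guidance is a hypothetical name).

```python
def maybe_show_guidance(hold_time_s, triggered, homed, hold_limit_s=10.0):
    """Show the giving guidance once the hold exceeds its preset upper limit
    without the posture change meeting the trigger or homing condition.
    hold_limit_s is an assumed value; the disclosure does not fix one."""
    return hold_time_s >= hold_limit_s and not (triggered or homed)

if maybe_show_guidance(12.0, triggered=False, homed=False):
    print("present the gift-giving guidance information once")
```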
Fig. 27 is a schematic diagram of a human-computer interaction apparatus provided by an embodiment of the present disclosure; the human-computer interaction apparatus 900 can be configured in an XR device and includes:
a gift panel presentation module 910, configured to present a corresponding close gift panel in the virtual space in response to a triggering operation of a gift entrance in the virtual space;
a gift giving module 920, configured to present a giving special effect of a virtual gift in the virtual space in response to a giving operation of a hand model in the virtual space toward any virtual gift in the close gift panel.
In some embodiments, the gift panel presentation module 910 can be used to: present the corresponding close gift panel in the virtual space in response to a cursor selection operation of a remote gift entrance in the virtual space; where the remote gift entrance is located in any virtual screen in the virtual space.
In some embodiments, the apparatus 900 may further include: a first retracting module, configured to retract the close gift panel in the virtual space in response to a triggering operation of the hand model toward a retract control in the close gift panel.
In some embodiments, the apparatus 900 may further include: a remote special effects playback module, used to play the remote presentation special effect or the remote retracting special effect of the close gift panel in the virtual space.
In some embodiments, the gift panel presentation module 910 can also be used to: present the corresponding close gift panel in the virtual space in response to a hovering confirmation operation of the hand model toward a close gift entrance in the virtual space; where the close gift entrance is located at a position close to the user in the virtual space and is in a first ghost state by default.
In some embodiments, the apparatus 900 may further include: a state change module, configured to control the close gift entrance to change from the ghost state to an activated state in response to a hovering operation of the hand model toward the close gift entrance.
In some embodiments, the state change module can also be used to: control the close gift entrance to change from the activated state to a second ghost state.
In some embodiments, the apparatus 900 may further include: a second retracting module, configured to retract the close gift panel in the virtual space in response to a repeated hovering confirmation operation of the hand model toward the close gift entrance.
In some embodiments, the apparatus 900 may further include: a close special effects playback module, used to play the close presentation special effect or the close retracting special effect of the close gift panel in the virtual space.
In some embodiments, the state change module can also be used to: control the close gift entrance to change back to the first ghost state.
In some embodiments, the gift panel presentation module 910 can also be used to: present the corresponding close gift panel at a predetermined position in the virtual reality space, where the distance between the predetermined position and the user meets a predetermined distance requirement.
In some embodiments, the gift giving module 920 can be used to: determine the gift giving safety area set in the virtual space; and present the giving special effect of the virtual gift in the virtual space according to the gift giving safety area.
In some embodiments, the live content display area, the gift giving safety area and the close gift panel in the virtual space lie in different spatial planes and are distributed sequentially in the virtual space according to a preset distance and area size.
In some embodiments, the giving special effect of the virtual gift includes: the spatial throwing trajectory of the virtual gift in the virtual space, and a throwing special effect set based on the spatial throwing trajectory and/or the virtual gift.
In some embodiments, the giving operation of the hand model in the virtual space toward any virtual gift in the close gift panel is determined by a giving operation determination module, which can be used to: determine the movement posture change amount of the hand model when holding the virtual gift, in response to a holding operation of the hand model in the virtual space toward any virtual gift in the close gift panel; and determine the giving operation of the hand model toward the virtual gift when the movement posture change amount meets the giving trigger condition of the virtual gift.
In some embodiments, the apparatus 900 may further include: a gift homing module, configured to control the virtual gift, in the virtual space, to fold back from the hand model to its original position on the close gift panel when the movement posture change amount meets the homing condition of the virtual gift.
In some embodiments, the apparatus 900 may further include: a gift guidance module, used to present the giving guidance information of the virtual gift in the virtual space if, by the time the duration for which the hand model holds the virtual gift reaches the preset holding upper limit, the movement posture change amount has met neither the giving trigger condition nor the homing condition of the virtual gift.
In some embodiments, the gift panel presentation module 910 can also be used to: present the panel template of the close gift panel in experience mode in the virtual space if the current presentation of the close gift panel in the virtual space is within a preset number of presentations; control the hand model to perform the corresponding giving operation on the experience gift in the panel template according to the gift giving guidance information in experience mode; present the exit information of the experience mode in the virtual space after the experience gift is successfully given in experience mode; and present the close gift panel in the virtual space in response to the exit operation of the experience mode.
In some embodiments, the apparatus 900 may further include: a panel page turning module, used to control the close gift panel to turn pages in response to a page turning operation of the close gift panel, and to present new virtual gifts following the page turning of the close gift panel.
In some embodiments, the page turning operation of the close gift panel includes at least one of the following: a dragging page turning operation of the hand model toward the close gift panel; a triggering operation of the hand model toward a page turning control in the close gift panel; a toggle operation of the hand model toward a rocker assembly in the handle model.
In some embodiments, the apparatus 900 may further include: a vibration module, used to control the XR device to perform different degrees of vibration according to the different interactive operations performed by the hand model toward any virtual object in the virtual space.
In the embodiments of the present disclosure, in response to the triggering operation of the gift entrance in the virtual space, the corresponding close gift panel can be presented in the virtual space; the close gift panel is beside the user and supports the user in performing the relevant giving operations on any virtual gift in it through the hand model. Therefore, in response to a giving operation of the hand model toward any virtual gift, the giving special effect of the virtual gift can be presented in the virtual space, without needing to select a virtual gift with a cursor ray and press the handle Trigger key to present the corresponding gift giving special effect; this increases the user's interactive operations when giving virtual gifts, realizes diversified interaction with virtual gifts, enhances the interactive fun and user interaction atmosphere in the virtual space, and mobilizes users' enthusiasm for live streaming.
It should be understood that the apparatus embodiments and the method embodiments in the present disclosure may correspond to each other, and similar descriptions can refer to the method embodiments; to avoid repetition, they are not repeated here.
For example, the apparatus 900 shown in Fig. 27 can execute any method embodiment provided by the present disclosure, and the foregoing and other operations and/or functions of each module in the apparatus 900 shown in Fig. 27 respectively implement the corresponding processes of the above method embodiments; for brevity, they are not repeated here.
The above method embodiments are described above from the perspective of functional modules with reference to the accompanying drawings. It should be understood that the functional modules can be implemented in the form of hardware, in the form of software instructions, or in a combination of hardware and software modules. For example, the steps of the method embodiments can be completed by integrated logic circuits of hardware in the processor and/or instructions in the form of software; the steps of the methods disclosed in the embodiments of the present disclosure can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. For example, the software module can be located in a storage medium mature in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Figure 28 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
As shown in Figure 28, the electronic device 1000 may include:
a memory 1010 and a processor 1020, where the memory 1010 is used to store a computer program and transmit the program code to the processor 1020. In other words, the processor 1020 can call and run the computer program from the memory 1010 to implement the methods in the embodiments of the present disclosure.
For example, the processor 1020 can be used to execute the above method embodiments according to instructions in the computer program.
In some embodiments of the present disclosure, the processor 1020 may include, but is not limited to:
a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on.
In some embodiments of the present disclosure, the memory 1010 includes, but is not limited to:
volatile memory and/or non-volatile memory. The non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory can be random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In some embodiments of the present disclosure, the computer program can be divided into one or more modules, which are stored in the memory 1010 and executed by the processor 1020 to complete the methods provided by the present disclosure. The one or more modules can be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution process of the computer program in the electronic device 1000.
As shown in Figure 28, the electronic device may further include:
a transceiver 1030, which can be connected to the processor 1020 or the memory 1010.
The processor 1020 can control the transceiver 1030 to communicate with other devices, for example, to send information or data to other devices, or to receive information or data sent by other devices. The transceiver 1030 can include a transmitter and a receiver, and may further include one or more antennas.
It should be understood that the components in the electronic device 1000 are connected by a bus system, which includes, in addition to a data bus, a power bus, a control bus, and a status signal bus.
The present disclosure also provides a computer storage medium on which a computer program is stored; when the computer program is executed by a computer, the computer is enabled to perform the methods of the above method embodiments.
An embodiment of the present disclosure also provides a computer program product containing instructions; when the instructions are executed by a computer, the computer performs the methods of the above method embodiments.
When implemented in software, it can be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present disclosure are generated in whole or in part. The computer can be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions can be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (such as infrared, radio, or microwave). The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium can be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The above are only embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in the present disclosure, and these should all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
It should be noted that the computer-readable medium described above in the present disclosure can be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium can be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media can include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium can include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any appropriate medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), and the like, or any suitable combination of the above.
In some implementations, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium can be included in the above electronic device, or it can exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is caused to: in response to an operation of a user entering a virtual reality space, present virtual reality video information in a virtual screen in the virtual reality space, and display multiple layers respectively in multiple spatial position regions in the virtual reality space, where each spatial position region has a different display distance from the user's current viewing position, and all of them are located in front of the virtual screen presenting the virtual reality video information. This realizes a structured display of video-related layers between the virtual screen and the user's current viewing position; by setting the hierarchy of distances between the layers and the user's current viewing position, the depth information of the virtual reality space is fully utilized, a stereoscopic display effect is achieved, and the user's viewing experience is improved.
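A minimal sketch of this depth layering is given below, assuming layers are spaced at a preset interval from the video display position toward the viewer in order of priority from low to high (as the claims later describe); the interval, layer names, and ordering are illustrative assumptions.

```python
def layer_positions(video_depth: float, viewer_depth: float,
                    layers_low_to_high: list[str],
                    interval: float = 0.5) -> dict[str, float]:
    """Place layers between the virtual screen and the viewer.

    Starting from the video display position, each layer is placed one
    preset interval closer to the user's current viewing position, in
    order of priority from low to high, so the highest-priority layer
    ends up nearest the viewer.
    """
    positions = {}
    depth = video_depth
    for name in layers_low_to_high:
        depth -= interval
        if depth <= viewer_depth:
            raise ValueError("not enough depth between screen and viewer")
        positions[name] = depth
    return positions

# Example with assumed layer names, lowest priority first.
print(layer_positions(10.0, 0.0,
                      ["info_feed", "expression", "gift", "operation_ui"]))
```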
Computer program code for performing the operations of the present disclosure can be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar languages. The program code can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams can represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks can occur out of the order noted in the drawings. For example, two blocks shown in succession can in fact be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure can be implemented by software or by hardware, and the name of a unit does not in some cases constitute a limitation on the unit itself.
The functions described above herein can be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium can be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium can be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium can include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the above. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.
In addition, although operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely example forms of implementing the claims.
Claims (45)
- A display processing method based on a virtual reality space, comprising: in response to an operation of a user entering a virtual reality space, presenting virtual reality video information in a virtual screen in the virtual reality space; and displaying multiple layers respectively in multiple spatial position regions in the virtual reality space, wherein each of the multiple spatial position regions has a different display distance from the user's current viewing position, and all of them are located in front of the virtual screen.
- The display processing method of claim 1, wherein displaying the multiple layers respectively in the multiple spatial position regions in the virtual reality space comprises: obtaining layer types of multiple layers corresponding to the virtual reality video information; determining, according to the layer types, the multiple spatial position regions of the multiple layers in the virtual reality space; and performing, for each of the multiple spatial position regions, display processing on layer messages in the layer corresponding to that spatial position region.
- The display processing method of claim 1 or 2, wherein determining, according to the layer types, the multiple spatial position regions of the multiple layers in the virtual reality space comprises: determining priority information of the multiple layers according to the layer types; and determining the multiple spatial position regions of the multiple layers in the virtual reality space according to the priority information.
- The display processing method of any one of claims 1-3, wherein determining, according to the priority information, the multiple spatial position regions of the multiple layers in the virtual reality space comprises: determining a video display position of the virtual reality video information in the virtual reality space; and, taking the video display position as a starting point, determining the spatial position region of each of the multiple layers one by one in the direction approaching the user's current viewing position, according to a preset distance interval and the order of the priority information from low to high, so as to determine the multiple spatial position regions of the multiple layers.
- The display processing method of claim 3 or 4, wherein determining, according to the priority information, the multiple spatial position regions of the multiple layers in the virtual reality space comprises: determining, according to the priority information, a target layer corresponding to the highest priority; determining the spatial position region of the target layer in the virtual reality space; and, taking that spatial position region as a starting point, determining the spatial position region of each other layer of the multiple layers one by one in the direction away from the user's current viewing position, according to a preset distance interval and the order of the priority information from high to low, so as to determine the multiple spatial position regions of the multiple layers.
- The display processing method of any one of claims 3-5, wherein determining, according to the priority information, the multiple spatial position regions of the multiple layers in the virtual reality space comprises: querying a preset database according to the priority information to obtain the multiple spatial position regions corresponding to the multiple layers.
- The display processing method of any one of claims 1-6, wherein the spatial position region corresponding to each of the multiple layers is located on an arc-shaped region corresponding to that layer, and the center position of the arc-shaped region corresponding to each layer is located in the user's gaze direction corresponding to the user's current viewing position.
- The display processing method of any one of claims 1-7, wherein: the perpendicular coordinate values of the spatial position regions differ; and the horizontal coordinate values corresponding to the spatial position regions are not all the same, and/or the vertical coordinate values corresponding to the spatial position regions are not all the same.
- The display processing method of any one of claims 2-8, wherein the layer types corresponding to the multiple layers include multiple of an operation user interface layer, an information feed display layer, a gift display layer, or an expression display layer.
- The display processing method of any one of claims 2-8, wherein performing, for each of the multiple spatial position regions, display processing on layer messages in the corresponding layer comprises: in response to obtaining a first layer message of a preset first type, determining a first layer corresponding to the first layer message; and displaying the first layer message in the spatial position region of the first layer.
- The display processing method of claim 10, wherein performing, for each of the multiple spatial position regions, display processing on layer messages in the corresponding layer comprises: in response to obtaining a second layer message of a preset second type, adding the second layer message to a layer message queue of a second layer corresponding to the second layer message, so that corresponding layer messages are displayed in the spatial position region of the second layer according to their queue order in the layer message queue.
- A display processing apparatus based on a virtual reality space, comprising: a display processing module, configured to present, in response to an operation of a user entering a virtual reality space, virtual reality video information in a virtual screen in the virtual reality space, and to display multiple layers respectively in multiple spatial position regions in the virtual reality space, wherein each of the multiple spatial position regions has a different display distance from the user's current viewing position, and all of them are located in front of the virtual screen.
- A model display method based on a virtual reality space, comprising: generating, in response to a gift display instruction, a target gift model corresponding to the gift display instruction; determining, among multiple layers corresponding to a currently playing virtual reality video, a target display layer corresponding to the target gift model, wherein the spatial position region corresponding to each of the multiple layers has a different display distance from the user's current viewing position; and displaying the target gift model on a target spatial position region corresponding to the target display layer.
- The model display method of claim 13, further comprising, before determining the target display layer corresponding to the target gift model: obtaining layer types of the multiple layers; and determining, according to the layer types, multiple target spatial position regions of the multiple layers in the virtual reality space, wherein each of the multiple target spatial position regions has a different display distance from the user's current viewing position.
- The model display method of claim 14, wherein determining, according to the layer types, the multiple target spatial position regions of the multiple layers in the virtual reality space comprises: determining, according to the layer types, the display distance of each layer corresponding to the user's current viewing position; taking the user's current viewing position as the center and the display distance corresponding to the user's current viewing position as the radius, extending in the user's gaze direction to determine an arc-shaped region for each layer; and determining, according to the arc-shaped regions, the multiple target spatial position regions of the multiple layers in the virtual reality space.
- The model display method of any one of claims 13-15, wherein generating the target gift model corresponding to the gift display instruction comprises: determining gift model rendering information corresponding to the gift display instruction; obtaining operation object information corresponding to the gift display instruction; and generating, according to the gift model rendering information and the operation object information, the target gift model corresponding to the gift display instruction.
- The model display method of any one of claims 13-16, wherein determining, among the multiple layers corresponding to the currently playing virtual reality video, the target display layer corresponding to the target gift model comprises: identifying first priority level information of the target gift model; determining second priority level information of each of the multiple layers; and determining the layer whose second priority level information matches the first priority level information as the target display layer.
- The model display method of any one of claims 13-17, wherein determining, among the multiple layers corresponding to the currently playing virtual reality video, the target display layer corresponding to the target gift model comprises: identifying the gift type of the target gift model; and determining, among the multiple layers, the layer matching the gift type as the target display layer.
- The model display method of any one of claims 13-18, wherein displaying the target gift model on the target spatial position region corresponding to the target display layer comprises: determining a display path of the target gift model and controlling, according to the display path, the target gift model to be displayed on the target spatial position region; and/or determining a display duration of the target gift model and controlling, according to the display duration, the target gift model to be displayed on the target spatial position region.
- A model display apparatus based on a virtual reality space, comprising: a generation module, configured to generate, in response to a gift display instruction, a target gift model corresponding to the gift display instruction; a determination module, configured to determine, among multiple layers corresponding to a currently playing virtual reality video, a target display layer corresponding to the target gift model, wherein the spatial position region corresponding to each of the multiple layers has a different display distance from the user's current viewing position; and a display module, configured to display the target gift model on a target spatial position region corresponding to the target display layer.
- A human-computer interaction method, applied to an extended reality (XR) device, comprising: presenting, in response to a trigger operation on a gift entry in a virtual space, a near-body gift panel corresponding to the gift entry in the virtual space; and presenting, in response to a gifting operation performed by a hand model in the virtual space toward any virtual gift in the near-body gift panel, a gifting special effect of the virtual gift in the virtual space.
- The human-computer interaction method of claim 21, wherein presenting, in response to a trigger operation on a gift entry in the virtual space, the near-body gift panel corresponding to the gift entry in the virtual space comprises: presenting, in response to a cursor selection operation on a far-body gift entry in the virtual space, the near-body gift panel corresponding to the far-body gift entry in the virtual space, wherein the far-body gift entry is located in any virtual screen in the virtual space.
- The human-computer interaction method of claim 21 or 22, further comprising: collapsing the near-body gift panel in the virtual space in response to a trigger operation of the hand model toward a collapse control in the near-body gift panel.
- The human-computer interaction method of claim 22 or 23, further comprising: playing, in the virtual space, a far-body presentation effect or a far-body collapse effect of the near-body gift panel.
- The human-computer interaction method of any one of claims 21-24, wherein presenting, in response to a trigger operation on a gift entry in the virtual space, the corresponding near-body gift panel in the virtual space comprises: presenting, in response to a hover confirmation operation of the hand model toward a near-body gift entry in the virtual space, the near-body gift panel corresponding to the near-body gift entry in the virtual space, wherein the near-body gift entry is located at a near-body position of the user in the virtual space and is in a first ghost state by default.
- The human-computer interaction method of claim 25, further comprising: controlling, in response to a hover operation of the hand model toward the near-body gift entry, the near-body gift entry to change from the ghost state to an activated state.
- The human-computer interaction method of claim 26, further comprising, after presenting the near-body gift panel corresponding to the near-body gift entry in the virtual space: controlling the near-body gift entry to change from the activated state to a second ghost state.
- The human-computer interaction method of any one of claims 25-27, further comprising, after presenting the near-body gift panel corresponding to the near-body gift entry in the virtual space: collapsing the near-body gift panel in the virtual space in response to another hover confirmation operation of the hand model toward the near-body gift entry.
- The human-computer interaction method of any one of claims 25-28, further comprising: playing, in the virtual space, a near-body presentation effect or a near-body collapse effect of the near-body gift panel.
- The human-computer interaction method of claim 28 or 29, further comprising, after collapsing the near-body gift panel in the virtual space: controlling the near-body gift entry to change back to the first ghost state.
- The human-computer interaction method of any one of claims 21-30, wherein presenting the near-body gift panel corresponding to the gift entry in the virtual space comprises: presenting the near-body gift panel corresponding to the gift entry at a predetermined position in the virtual reality space, the distance between the predetermined position and the user satisfying a predetermined distance requirement.
- The human-computer interaction method of any one of claims 21-30, wherein presenting the gifting special effect of the virtual gift in the virtual space comprises: determining a gift-giving safety area that has been set in the virtual space; and presenting the gifting special effect of the virtual gift in the virtual space according to the gift-giving safety area.
- The human-computer interaction method of claim 32, wherein the live-streaming content display area, the gift-giving safety area, and the near-body gift panel in the virtual space lie in different spatial planes and are distributed in sequence in the virtual space according to preset distances and area sizes.
- The human-computer interaction method of any one of claims 21-33, wherein the gifting special effect of the virtual gift includes a spatial throw trajectory of the virtual gift in the virtual space, and a throw effect set based on at least one of the spatial throw trajectory or the virtual gift.
- The human-computer interaction method of any one of claims 21-33, wherein the gifting operation performed by the hand model in the virtual space toward any virtual gift in the near-body gift panel is determined by the following steps: determining, in response to a grip operation of the hand model in the virtual space toward any virtual gift in the near-body gift panel, the motion pose change amount of the hand model while holding the virtual gift; and determining the gifting operation of the hand model toward the virtual gift in the case where the motion pose change amount satisfies the gifting trigger condition of the virtual gift.
- The human-computer interaction method of claim 35, further comprising: controlling, in the virtual space, the virtual gift to fold back from the hand model to the virtual gift's original position on the near-body gift panel in the case where the motion pose change amount satisfies the return condition of the virtual gift.
- The human-computer interaction method of claim 36, further comprising: presenting the gifting guidance information of the virtual gift in the virtual space in the case where the duration for which the hand model holds the virtual gift reaches a preset holding limit and the motion pose change amount has not yet satisfied the gifting trigger condition or the return condition of the virtual gift.
- The human-computer interaction method of any one of claims 21-37, wherein presenting the near-body gift panel corresponding to the gift entry in the virtual space comprises: presenting, in the case where the current presentation of the near-body gift panel in the virtual space is within a preset number of presentations, a panel template of the near-body gift panel in an experience mode in the virtual space; controlling, according to gift-giving guidance information in the experience mode, the hand model to perform the gifting operation corresponding to the gift-giving guidance information on an experience gift in the panel template; presenting, after the experience gift is successfully given in the experience mode, exit information of the experience mode in the virtual space; and presenting the near-body gift panel in the virtual space in response to an exit operation of the experience mode.
- The human-computer interaction method of any one of claims 21-38, further comprising, after presenting the near-body gift panel corresponding to the gift entry in the virtual space: controlling, in response to a page-turning operation on the near-body gift panel, the near-body gift panel to turn pages, and presenting new virtual gifts following the page turning of the near-body gift panel.
- The human-computer interaction method of claim 39, wherein the page-turning operation on the near-body gift panel includes at least one of the following: a drag page-turning operation of the hand model toward the near-body gift panel; a trigger operation of the hand model toward a page-turning control in the near-body gift panel; or a toggle operation of the hand model toward a joystick component in a controller model.
- The human-computer interaction method of any one of claims 21-40, further comprising: controlling, according to different interaction operations performed by the hand model toward any virtual object in the virtual space, the XR device to vibrate to different degrees.
- A human-computer interaction apparatus, configured in an XR device, comprising: a gift panel presentation module, configured to present, in response to a trigger operation on a gift entry in a virtual space, a near-body gift panel corresponding to the gift entry in the virtual space; and a gift giving module, configured to present, in response to a gifting operation performed by a hand model in the virtual space toward any virtual gift in the near-body gift panel, a gifting special effect of the virtual gift in the virtual space.
- An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; the processor being configured to read the executable instructions from the memory and execute the executable instructions to implement the display processing method based on a virtual reality space of any one of claims 1-11, the model display method based on a virtual reality space of any one of claims 13-19, or the human-computer interaction method of any one of claims 21-41.
- A non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, the computer program being used to perform the display processing method based on a virtual reality space of any one of claims 1-11, the model display method based on a virtual reality space of any one of claims 13-19, or the human-computer interaction method of any one of claims 21-41.
- A computer program, comprising instructions which, when executed by a processor, cause the processor to perform the display processing method based on a virtual reality space of any one of claims 1-11, the model display method based on a virtual reality space of any one of claims 13-19, or the human-computer interaction method of any one of claims 21-41.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210995392.4A CN117641026A (zh) | 2022-08-18 | 2022-08-18 | 基于虚拟现实空间的模型显示方法、装置、设备及介质 |
CN202210995392.4 | 2022-08-18 | ||
CN202210993897.7 | 2022-08-18 | ||
CN202210993897.7A CN117632063A (zh) | 2022-08-18 | 2022-08-18 | 基于虚拟现实空间的显示处理方法、装置、设备及介质 |
CN202211124457.4A CN117742476A (zh) | 2022-09-15 | 2022-09-15 | 人机交互方法、装置、设备和存储介质 |
CN202211124457.4 | 2022-09-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024037565A1 true WO2024037565A1 (zh) | 2024-02-22 |
Family
ID=89940730
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/113360 WO2024037565A1 (zh) | 2022-08-18 | 2023-08-16 | 人机交互方法、基于虚拟现实空间的显示处理方法、模型显示方法、装置、设备及介质 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024037565A1 (zh) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106683161A (zh) * | 2016-12-13 | 2017-05-17 | 中国传媒大学 | 基于图像分割与自定义图层法的增强现实遮挡方法 |
CN108391153A (zh) * | 2018-01-29 | 2018-08-10 | 北京潘达互娱科技有限公司 | 虚拟礼物显示方法、装置及电子设备 |
US20180373803A1 (en) * | 2017-05-16 | 2018-12-27 | Apple Inc. | Device, Method, and Graphical User Interface for Managing Website Presentation Settings |
CN111105491A (zh) * | 2019-11-25 | 2020-05-05 | 腾讯科技(深圳)有限公司 | 场景渲染方法、装置、计算机可读存储介质和计算机设备 |
CN111526412A (zh) * | 2020-04-30 | 2020-08-11 | 广州华多网络科技有限公司 | 全景直播方法、装置、设备及存储介质 |
CN114466211A (zh) * | 2022-01-30 | 2022-05-10 | 乐美客信息技术(深圳)有限公司 | 一种基于虚拟现实技术的直播交互方法及系统 |
CN114760519A (zh) * | 2022-04-20 | 2022-07-15 | 广州方硅信息技术有限公司 | 基于直播间礼物特效的互动方法、装置、设备及存储介质 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12061745B2 (en) | Gesture input with multiple views, displays and physics | |
US10916065B2 (en) | Prevention of user interface occlusion in a virtual reality environment | |
US20200074742A1 (en) | User Interface Security in a Virtual Reality Environment | |
JP7344974B2 (ja) | マルチ仮想キャラクターの制御方法、装置、およびコンピュータプログラム | |
JP6611733B2 (ja) | 表示装置の閲覧者の注視誘引 | |
JP3880561B2 (ja) | 表示システム | |
US20190339837A1 (en) | Copy and Paste in a Virtual Reality Environment | |
US20190340818A1 (en) | Display Reorientation in a Virtual Reality Environment | |
US11941764B2 (en) | Systems, methods, and graphical user interfaces for adding effects in augmented reality environments | |
KR102251252B1 (ko) | 가상 현실에서의 위치 지구본 | |
CN107787472A (zh) | 用于虚拟现实中的凝视交互的悬停行为 | |
WO2013095393A1 (en) | Augmented reality representations across multiple devices | |
JP2006294032A (ja) | 表示システム、表示制御装置、表示装置、表示方法、およびユーザインタフェイス装置 | |
US20240153219A1 (en) | Systems, Methods, and Graphical User Interfaces for Adding Effects in Augmented Reality Environments | |
US10846901B2 (en) | Conversion of 2D diagrams to 3D rich immersive content | |
CN110192169B (zh) | 虚拟场景中菜单处理方法、装置及存储介质 | |
WO2024174861A1 (zh) | 虚拟现实场景中的交互方法、装置、设备及存储介质 | |
WO2024037565A1 (zh) | 人机交互方法、基于虚拟现实空间的显示处理方法、模型显示方法、装置、设备及介质 | |
WO2024037559A1 (zh) | 信息交互方法、人机交互方法、装置、电子设备和存储介质 | |
US20240211092A1 (en) | Systems and methods of virtualized systems on electronic devices | |
US20240037865A1 (en) | Xr multi-window control | |
CN117742476A (zh) | 人机交互方法、装置、设备和存储介质 | |
CN118488280A (zh) | 人机交互方法、装置、设备和介质 | |
CN116206090A (zh) | 基于虚拟现实空间的拍摄方法、装置、设备及介质 | |
WO2024072595A1 (en) | Translating interactions on a two-dimensional interface to an artificial reality experience |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23854465 Country of ref document: EP Kind code of ref document: A1 |