CN117641025A - Model display method, device, equipment and medium based on virtual reality space - Google Patents

Info

Publication number
CN117641025A
Authority
CN
China
Prior art keywords
model
gift
virtual reality
layer
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210993895.8A
Other languages
Chinese (zh)
Inventor
孟凡超
冀利悦
吴洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210993895.8A priority Critical patent/CN117641025A/en
Publication of CN117641025A publication Critical patent/CN117641025A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure relate to a model display method, device, equipment and medium based on a virtual reality space. The method includes: in response to receiving a model selection operation on a gift control model in a first layer, displaying in the first layer a gift selection panel containing a plurality of candidate gift models, where the first layer is, among a plurality of layers corresponding to a currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position; and in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, displaying a model animation corresponding to the target gift model in the virtual reality space. Embodiments of the present disclosure thereby realize gift-sending interaction in the virtual reality space, make full use of the depth information of the virtual reality space to achieve a VR display effect for the gift animation, and improve the user's interactive experience when watching video.

Description

Model display method, device, equipment and medium based on virtual reality space
Technical Field
The disclosure relates to the technical field of virtual reality, and in particular to a model display method, device, equipment and medium based on a virtual reality space.
Background
Virtual Reality (VR) technology, also known as virtual environment or artificial environment technology, refers to technology that uses a computer to generate a virtual world that directly provides visual, auditory, and tactile sensations to participants and allows them to observe and interact with it.
In the related art, when a live video is played in the real world, a user can send a gift by clicking a gift control. How to realize such gift-sending interaction in a virtual reality space is therefore of great significance for improving the realism of video viewing in the virtual reality world.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides a model display method, device, equipment and medium based on a virtual reality space, which realize gift-sending interaction in the virtual reality space, make full use of the depth information of the virtual reality space to achieve a VR display effect for gift animations, and improve the user's interactive experience when watching video.
An embodiment of the present disclosure provides a model display method based on a virtual reality space, including: in response to receiving a model selection operation on a gift control model in a first layer, displaying in the first layer a gift selection panel containing a plurality of candidate gift models, where the first layer is, among a plurality of layers corresponding to a currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position; and in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, displaying a model animation corresponding to the target gift model in the virtual reality space.
An embodiment of the present disclosure further provides a model display apparatus based on a virtual reality space, including: a first display module, configured to, in response to receiving a model selection operation on a gift control model in a first layer, display in the first layer a gift selection panel containing a plurality of candidate gift models, where the first layer is, among a plurality of layers corresponding to a currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position; and a second display module, configured to, in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, display a model animation corresponding to the target gift model in a virtual reality space.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to read the executable instructions from the memory and execute the instructions to implement the model display method based on a virtual reality space provided by the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, the computer program being used to execute the model display method based on a virtual reality space provided by the embodiments of the present disclosure.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
According to the model display scheme based on a virtual reality space provided by the embodiments of the present disclosure, in response to receiving a model selection operation on a gift control model in a first layer, a gift selection panel containing a plurality of candidate gift models is displayed in the first layer, where the first layer is, among the plurality of layers corresponding to the currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position; further, in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, a model animation corresponding to the target gift model is displayed in the virtual reality space. In this way, gift-sending interaction in the virtual reality space is realized, the depth information of the virtual reality space is fully utilized, a VR display effect for the gift animation is achieved, and the user's interactive experience when watching video is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of an application scenario of a virtual reality device according to an embodiment of the disclosure;
Fig. 2 is a schematic flowchart of a model display method based on a virtual reality space according to an embodiment of the disclosure;
Fig. 3 is a schematic diagram of a hierarchical structure between multiple layers according to an embodiment of the disclosure;
Fig. 4 is a schematic diagram of a model display scene based on a virtual reality space according to an embodiment of the disclosure;
Fig. 5 is a schematic diagram of another model display scene based on a virtual reality space according to an embodiment of the disclosure;
Fig. 6A is a schematic diagram of another model display scene based on a virtual reality space according to an embodiment of the disclosure;
Fig. 6B is a schematic diagram of another model display scene based on a virtual reality space according to an embodiment of the disclosure;
Fig. 7 is a schematic structural diagram of a model display apparatus based on a virtual reality space according to an embodiment of the disclosure;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Some technical terms and concepts referred to herein are first explained:
the virtual reality device, the terminal for realizing the virtual reality effect, may be provided in the form of glasses, a head mounted display (Head Mount Display, HMD), or a contact lens for realizing visual perception and other forms of perception, but the form of the virtual reality device is not limited to this, and may be further miniaturized or enlarged as needed.
The virtual reality devices described in embodiments of the present disclosure may include, but are not limited to, the following types:
A computer-side virtual reality (PCVR) device uses a PC to perform the computation related to virtual reality functions and to output data; the external PCVR device uses the data output by the PC to realize the virtual reality effect.
A mobile virtual reality device supports mounting a mobile terminal (such as a smartphone) in various manners (such as a head-mounted display provided with a dedicated card slot). Connected to the mobile terminal in a wired or wireless manner, the mobile terminal performs the computation related to virtual reality functions and outputs data to the mobile virtual reality device; for example, a virtual reality video is watched through an APP on the mobile terminal.
An integrated (all-in-one) virtual reality device has a processor for performing the computation related to virtual reality functions, and thus has independent virtual reality input and output capabilities; it does not need to be connected to a PC or a mobile terminal and offers a high degree of freedom in use.
Virtual reality objects are objects that interact in a virtual scene, whether stationary, moving, or performing various actions, such as the virtual person corresponding to a user in a live-streaming scene; they are controlled by a user or by a robot program (e.g., an artificial-intelligence-based robot program).
As shown in Fig. 1, HMDs are relatively light, ergonomically comfortable, and provide high-resolution content with low latency. A sensor for posture detection (such as a nine-axis sensor) is arranged in the virtual reality device to detect posture changes of the device in real time. When a user wears the device and the posture of the user's head changes, the real-time head posture is transmitted to the processor, which calculates the gaze point of the user's line of sight in the virtual environment, calculates from the gaze point the image within the user's gaze range (i.e., the virtual field of view) in the three-dimensional model of the virtual environment, and displays that image on the display screen, so that the user experiences the scene as if watching in a real environment.
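Although the disclosure contains no code, the pose-to-gaze step just described can be illustrated with a short sketch. The TypeScript below is a minimal, hypothetical illustration (all types and names are assumptions, not from the patent): it rotates a forward axis by the head orientation reported by the sensor to obtain the gaze direction used to select the virtual field of view.

```typescript
// Hypothetical sketch: derive the gaze direction from the HMD head pose.
type Vec3 = { x: number; y: number; z: number };
type Quat = { x: number; y: number; z: number; w: number }; // unit quaternion

function cross(a: Vec3, b: Vec3): Vec3 {
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  };
}

// Rotate vector v by unit quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)).
function rotate(v: Vec3, q: Quat): Vec3 {
  const u = { x: q.x, y: q.y, z: q.z };
  const t = cross(u, v);
  const t2 = { x: 2 * t.x, y: 2 * t.y, z: 2 * t.z };
  const c = cross(u, t2);
  return {
    x: v.x + q.w * t2.x + c.x,
    y: v.y + q.w * t2.y + c.y,
    z: v.z + q.w * t2.z + c.z,
  };
}

// The gaze direction is the camera's forward axis (-Z here, a common
// convention) rotated by the real-time head pose from the nine-axis sensor.
function gazeDirection(headPose: Quat): Vec3 {
  return rotate({ x: 0, y: 0, z: -1 }, headPose);
}
```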
In this embodiment, when a user wears the HMD device and opens a predetermined application program, such as a live video application, the HMD device runs a corresponding virtual scene, where the virtual scene may be a simulation of the real world, a semi-simulated virtual scene, or a purely virtual scene. The virtual scene may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the dimension of the virtual scene is not limited in this disclosure. For example, the virtual scene may include persons, sky, land, sea, and the like, where the land may include environmental elements such as deserts and cities. The user may control a virtual object to move in the virtual scene, and may also interactively control controls, models, presented content, persons, and the like in the virtual scene by means of a manipulation device such as a handle, or by bare-hand gestures.
In order to fully utilize depth information in a virtual reality space and improve interaction experience when a user watches videos, the embodiment of the disclosure provides a model display method based on the virtual reality space.
The method is described below in connection with specific examples.
Fig. 2 is a schematic flowchart of a model display method based on a virtual reality space according to an embodiment of the disclosure. The method may be performed by a model display apparatus based on a virtual reality space, where the apparatus may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in Fig. 2, the method includes:
Step 201: in response to receiving a model selection operation on a gift control model in a first layer, display in the first layer a gift selection panel containing a plurality of candidate gift models, where the first layer is, among the plurality of layers corresponding to the currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position.
It will be appreciated that in the real world, when a live video stream is played, a wide variety of information is displayed on the video interface, such as bullet-screen (danmaku) comments, gift information, and the like. In one embodiment of the present disclosure, the depth information of the virtual reality space is fully utilized, and different kinds of information are split across different layers according to scene requirements, realizing a hierarchical display effect.
In one embodiment of the present disclosure, the layer types of the plurality of layers corresponding to the currently played virtual reality video are obtained. The layer types may be calibrated according to the scene and include, but are not limited to, several of an operation user interface layer, an information flow display layer, a gift display layer, an expression display layer, and the like, where each layer type may include at least one sub-layer; for example, the operation user interface layer may include a manipulation layout layer, which are not listed one by one here.
In one embodiment of the present disclosure, a plurality of spatial position areas of the plurality of layers in the virtual reality space are determined according to the layer types, where each spatial position area has a different display distance from the user's current viewing position. Visually, different layer types are thus at different distances from the user, and the user usually notices the content on the closest layer first; therefore, according to the scene requirements, the hierarchical structure between the layers can be used to display different information, improving the user's viewing experience.
It should be noted that the coordinate values of the spatial position areas of the multiple layers differ along the depth axis, realizing a staggered "near and far" display in the user's vision. In addition, to avoid display occlusion between layers, the horizontal coordinate values of the corresponding spatial position areas may not be identical (i.e., front and rear layers include layer portions that do not overlap in the X-axis direction), so that the layers are staggered in the horizontal direction; or the vertical coordinate values may not be identical (i.e., front and rear layers include layer portions that do not overlap in the Y-axis direction), so that the layers are staggered in the vertical direction; or the layers may be staggered in both the horizontal and the vertical directions at the same time.
For example, as shown in Fig. 3, when the application scene is the playing of a live video stream, the spatial position display areas of the corresponding layers are arranged in sequence in front of the currently played virtual video frame, and different layers have different display distances from the user's current viewing position. When the user views the scene in the virtual reality space, a layer with a smaller display distance is obviously closer to the user's eyes and is noticed more easily; for example, using this hierarchical relationship to place layers of operation-interface type at the position closest to the user's eyes clearly makes the user's operations more convenient.
With continued reference to Fig. 3, in order to avoid front-to-back occlusion between layers, the horizontal coordinate values of the spatial position areas corresponding to the multiple layers are not identical, and neither are the vertical coordinate values, so that a viewing user can see the layer messages on all of the layers in front of the virtual screen.
It should be emphasized that the hierarchical structure between the layers manifests to the viewing user only as a difference in attention sensitivity to different layers. The viewing user cannot directly see the hierarchical relationship between the spatial position areas of different layers; they can only feel the difference in distance of the content on different layers. Because a user tends to pay attention to information closer to them, attention to the information on the layer closest to the user is promoted.
In order to fully utilize the depth information in the virtual reality space, the layer priority of each layer can be determined based on scene requirements, the spatial position areas of different layers can be determined based on the priority levels, and layer messages can be divided among different layers for display according to indexes such as the importance of the messages in the scene, further improving the user's viewing experience.
In one embodiment of the present disclosure, determining the plurality of spatial position areas of the plurality of layers in the virtual reality space according to the layer types includes: determining priority information of the layers according to the layer types, and then determining the plurality of spatial position areas of the layers in the virtual reality space according to the priority information.
The higher the priority information, the more the layer message displayed in the corresponding layer needs to draw the user's attention, and therefore the closer the corresponding target spatial area is to the user's viewing position. The priority information corresponding to a layer type can be calibrated according to scene requirements, and the priority information corresponding to the same layer type may differ between scenes.
In actual execution, the manner of determining the plurality of spatial position areas of the plurality of layers in the virtual reality space according to the priority information differs between application scenarios, as exemplified below:
In some possible embodiments, considering that the virtual video is displayed at a relatively fixed position in the virtual scene, the spatial position areas of the corresponding layers are also set relatively fixedly: the spatial position areas corresponding to different priority information are stored in a preset database, and the preset database is queried according to the priority information to obtain the plurality of spatial position areas corresponding to the plurality of layers.
In some possible embodiments, the corresponding layers are set in front of the video display position. In this embodiment, the video display position of the virtual reality video in the virtual reality space is determined, where the video display position may be determined according to the set canvas position on which the video is displayed; then, with the video display position as a starting point, the spatial position area of each layer is determined one by one at a preset distance interval, in order of priority information from low to high, thereby determining the plurality of spatial position areas of the plurality of layers. The preset distance interval can be calibrated according to experimental data.
In some possible embodiments, the corresponding layers are set along the direction of the user's line of sight. In this embodiment, the target layer corresponding to the highest priority is determined according to the priority information, and the spatial position area of the target layer in the virtual reality space is determined, where the manner of determining that spatial position area differs between application scenarios:
In some possible implementations, the spatial position area may be determined to lie at a preset distance threshold from the user's current viewing position along the direction of the user's line of sight. In other possible implementations, the total distance between the user's current viewing position and the video display position of the virtual reality video in the virtual reality space may be determined, and the spatial position area is placed, starting from the video display position, at a preset proportion threshold of that total distance; this prevents a displayed layer from being too close to the viewing user and limits the information seen within the user's viewing angle range.
Further, after the spatial position area of the target layer is determined, the spatial position areas of the other layers are determined one by one, taking that area as a starting point and moving away from the user's current viewing position at a preset distance interval, in order of priority information from high to low, thereby determining the plurality of spatial position areas of the plurality of layers. This preset distance interval may be the same as or different from the preset distance interval in the above embodiment and may be set according to scene requirements.
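The placement logic of the preceding paragraphs can be sketched as follows. This is an illustrative, assumption-laden sketch rather than the patent's implementation: the viewer is assumed to look down -Z at the video canvas, the distance interval and stagger offsets are invented constants, and layers are ordered by priority so that higher-priority layers land nearer the viewer while small X/Y offsets keep front layers from fully occluding those behind.

```typescript
// Minimal sketch: depth ordering by priority plus X/Y stagger.
type Vec3 = { x: number; y: number; z: number };

interface LayerSpec { type: string; priority: number } // larger = closer to viewer
interface PlacedLayer extends LayerSpec { position: Vec3 }

const DEPTH_INTERVAL = 0.3; // assumed preset distance interval (meters)
const STAGGER_STEP = 0.05;  // assumed per-layer X/Y stagger offset

function placeLayers(layers: LayerSpec[], videoPos: Vec3): PlacedLayer[] {
  // Order from low to high priority, so higher-priority layers end up
  // farther from the canvas, i.e. nearer the viewer.
  const ordered = [...layers].sort((a, b) => a.priority - b.priority);
  return ordered.map((layer, i) => ({
    ...layer,
    position: {
      x: videoPos.x + i * STAGGER_STEP,         // horizontal stagger (X axis)
      y: videoPos.y + i * STAGGER_STEP,         // vertical stagger (Y axis)
      z: videoPos.z + (i + 1) * DEPTH_INTERVAL, // step toward the viewer
    },
  }));
}
```

A variant of the same sketch could instead anchor the highest-priority layer at a preset fraction of the viewer-to-canvas distance, as in the line-of-sight embodiment above, and step away from the viewer from there.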
The plurality of layers includes a first layer, and the gift control model is displayed on the spatial position area corresponding to the first layer. In order to facilitate the user's interactive operations, the first layer may be the layer with the highest priority, that is, the layer with the smallest display distance from the user's current viewing position.
In one embodiment of the present disclosure, the gift control model is displayed in the first layer, which is closest to the user, thus facilitating the user's related interactions.
In this embodiment, in response to receiving a model selection operation on the gift control model in the first layer, a gift selection panel containing a plurality of candidate gift models is displayed in the first layer, where each candidate gift model corresponds to one gift model, and the specific style of each candidate gift model can be calibrated according to scene requirements.
Step 202: in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, display a model animation corresponding to the target gift model in the virtual reality space.
In one embodiment of the present disclosure, a gift display operation on a target gift model among the plurality of candidate gift models may be received, and a model animation corresponding to the target gift model is displayed in the virtual reality space. In some possible embodiments, the corresponding target gift model may first be selected by means of a virtual manipulation device, or by a voice or gesture operation; then, if a gift display operation on the target gift model is received, the model animation corresponding to the target gift model is displayed in the virtual reality space. The gift display operation may be implemented by triggering a related control on the virtual manipulation device, or after voice information containing a corresponding keyword is detected from the user, which are not listed one by one here.
In this way, gift-sending interaction is conveniently realized in the virtual reality space; on the basis of simulating real-world gift-sending interaction, the depth information of the virtual reality space is fully utilized, improving the intelligent experience of the interaction.
It will be appreciated that, in order to further utilize the depth information in the virtual reality space, the model animation style of the target gift model may also be diversified; for example, it may be a 3D animation. In one embodiment of the present disclosure, when the corresponding model animation is displayed, different display positions may also be selected by utilizing the depth characteristics of the virtual reality world.
In some possible embodiments, the model type of the target gift model may be identified, a second layer matching the model type may be determined among the plurality of layers, and the model animation corresponding to the target gift model may be displayed in the second layer. For example, when the plurality of layers includes a half-screen gift layer and a full-screen gift layer, if the model type of the target gift model is a full-screen gift, the corresponding model animation is displayed on the full-screen gift layer in a manner covering the full screen, further improving the display effect of the model animation.
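As a hedged illustration of the half-screen/full-screen routing just described (the type and layer names are assumptions for this example only, not taken from the patent):

```typescript
// Illustrative sketch: route a gift animation to the layer matching its type.
type GiftModelType = 'halfScreen' | 'fullScreen';

interface GiftLayer { name: string; play(animationId: string): void }

function displayGiftAnimation(
  modelType: GiftModelType,
  animationId: string,
  layers: Map<string, GiftLayer>,
): void {
  // Pick the "second layer" whose purpose matches the model type.
  const layerName =
    modelType === 'fullScreen' ? 'fullScreenGiftLayer' : 'halfScreenGiftLayer';
  layers.get(layerName)?.play(animationId);
}
```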
In some possible embodiments, after the gift display operation on the target gift model among the plurality of candidate gift models is received, a display position trigger operation of the user in the virtual reality space is further acquired, and the model animation is displayed at the position corresponding to the display position trigger operation, so that the model animation can be displayed at any position in the virtual reality space, further improving the interest of the gift-sending interaction. The display position trigger operation may be input through a voice operation, or determined from gesture information; for example, an image of the user's hand is captured, and when the user's hand posture is detected to be a preset display posture, the position of the user's hand is determined to be the display position of the model animation, and so on. For another example, a trigger direction indication model may be rendered in the virtual reality space along the manipulation direction corresponding to the virtual manipulation device (such as a handle), where the trigger direction indication model is used to indicate, in the virtual reality space, the trigger direction corresponding to the manipulation direction, the trigger direction corresponding to the manipulation direction in real time. When the triggering of a preset display-position-determining control on the virtual manipulation device is detected, the trigger position corresponding to the current trigger direction is determined to be the display position of the model animation, and so on.
In summary, in the model display method based on a virtual reality space of the embodiments of the present disclosure, in response to receiving a model selection operation on a gift control model in a first layer, a gift selection panel containing a plurality of candidate gift models is displayed in the first layer, where the first layer is, among the plurality of layers corresponding to the currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position; further, in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, a model animation corresponding to the target gift model is displayed in the virtual reality space. In this way, gift-sending interaction in the virtual reality space is realized, the depth information of the virtual reality space is fully utilized, a VR display effect for the gift animation is achieved, and the user's interactive experience when watching video is improved.
Based on the above embodiments, the manner in which the user performs the related operations (such as the above model selection operation and gift display operation) differs between application scenarios, as described below with specific examples:
In one embodiment of the disclosure, a trigger direction indication model is rendered in the virtual reality space according to the manipulation direction corresponding to the virtual manipulation device, where the trigger direction indication model is used to indicate, in the virtual reality space, the trigger direction corresponding to the manipulation direction, and the trigger direction corresponds to the manipulation direction in real time.
In a virtual reality scene, the user selects the gift control model through a manipulation device; the virtual manipulation device may be a handle, and the user selects the gift control model by operating a button on the manipulation device. Of course, in another embodiment, instead of using a manipulation device, the gift control model may be selected in the HMD device by a multi-modal control manner such as gesture, voice, or a combination of gesture and voice. In this embodiment, the user may adjust the manipulation direction by rotating the angle of a related control on the manipulation device, or by rotating the angle of the manipulation device itself, and so on.
Unlike in the real world, the user cannot directly touch a display screen in the virtual reality space. Therefore, in order to intuitively guide the user's trigger operations, a trigger direction indication model is rendered in the virtual reality space according to the manipulation direction corresponding to the manipulation device, where the trigger direction indication model indicates, in the virtual reality space, the trigger direction corresponding to the manipulation direction, and the trigger direction corresponds to the manipulation direction in real time, so that the user can intuitively know the current trigger direction of the virtual manipulation device in the virtual reality space.
In an embodiment of the present disclosure, a manipulation direction adjustment instruction sent by the virtual manipulation device may be received, where the instruction may be triggered by adjusting the manipulation direction through the above-mentioned rotation of a related control on the manipulation device, or by rotating the angle of the manipulation device itself.
Further, the manipulation direction corresponding to the manipulation device is determined according to the manipulation direction adjustment instruction, for example, by determining the manipulation direction corresponding to the rotation angle, and the trigger direction corresponding to the trigger direction indication model is determined according to the manipulation direction. For example, in some possible embodiments, the manipulation direction is determined to be the corresponding trigger direction; in other possible embodiments, a preset mapping relationship is queried to obtain the trigger direction corresponding to the manipulation direction.
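Both resolution strategies can be sketched in a few lines; the mapping entries and names below are purely illustrative assumptions:

```typescript
// Sketch of the two strategies above: use the manipulation direction directly,
// or look the trigger direction up in a preset mapping relationship.
type Vec3 = { x: number; y: number; z: number };

const presetDirectionMap = new Map<string, Vec3>([
  // e.g. a named controller pose mapped to a fixed trigger direction
  ['forwardTilt', { x: 0, y: -0.26, z: -0.97 }],
]);

function resolveTriggerDirection(manipulationDir: Vec3, poseKey?: string): Vec3 {
  // Fall back to the direct strategy when no mapping entry applies.
  return (poseKey !== undefined && presetDirectionMap.get(poseKey)) || manipulationDir;
}
```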
The trigger direction indication model to be rendered may be set according to the scene; it visually indicates, in the virtual reality space, the trigger direction corresponding to the manipulation direction. Since active operation in the virtual reality space is a "floating" operation, the rendered trigger direction indication model is generally a model with a sense of "extension".
For example, as shown in Fig. 4, the trigger direction indication model is a ray track model, where the starting point of the ray track model is the spatial position corresponding to the virtual manipulation device in the virtual reality space, the ray track model extends from the track starting point into the virtual reality space, and the extension direction is the trigger direction corresponding to the track. When the track end point of the ray track model is located on the gift control model, the corresponding gift control model is in a selected state.
In other alternatives, the ray track model may be extended to any model rendered with "ray" logic, such as a "pointer" model.
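The "track end point located on the gift control model" test implied above amounts to a ray-geometry intersection. The sketch below uses a standard axis-aligned slab test under the assumption that each model exposes an axis-aligned bounding box; the patent does not specify the geometry representation.

```typescript
// Hypothetical sketch: does the ray cast from the controller along the
// trigger direction hit a gift control model's bounds?
type Vec3 = { x: number; y: number; z: number };

interface Aabb { min: Vec3; max: Vec3 } // axis-aligned bounds of a model

function rayHitsBox(origin: Vec3, dir: Vec3, box: Aabb): boolean {
  let tNear = -Infinity;
  let tFar = Infinity;
  for (const axis of ['x', 'y', 'z'] as const) {
    if (Math.abs(dir[axis]) < 1e-9) {
      // Ray parallel to this slab: must already be inside it.
      if (origin[axis] < box.min[axis] || origin[axis] > box.max[axis]) return false;
    } else {
      const t1 = (box.min[axis] - origin[axis]) / dir[axis];
      const t2 = (box.max[axis] - origin[axis]) / dir[axis];
      tNear = Math.max(tNear, Math.min(t1, t2));
      tFar = Math.min(tFar, Math.max(t1, t2));
    }
  }
  return tFar >= Math.max(tNear, 0); // hit must lie in front of the controller
}
```

When `rayHitsBox` returns true for the gift control model, the model would enter the selected state described next (e.g., enlarged or highlighted).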
Further, when the gift control model is located in the trigger direction of the trigger direction indication model, a preset model selection operation is received and responded to, where the preset model selection operation may be implemented by triggering a preset control on the virtual manipulation device, through voice input, and so on.
In one embodiment of the present disclosure, in order to further enhance the visual experience of the operation, after the model selection operation on the gift control model in the first layer is received, the gift control model is further displayed in a preset control-selected state. The manner of displaying the gift control model in the preset control-selected state differs between embodiments. In some possible embodiments, as shown in Fig. 5, when the track end point of the ray track model is located on the gift control model, the gift control model may be enlarged according to a preset proportion to indicate that it is in the preset control-selected state; in other possible embodiments, the gift control model may be highlighted to indicate that it is in the preset control-selected state, and so on.
In one embodiment of the present disclosure, after the plurality of candidate gift models are displayed, receiving the gift display operation on the target gift model among the plurality of candidate gift models includes: when the target gift model is located in the trigger direction of the trigger direction indication model, receiving and responding to a preset gift display operation, where the preset gift display operation indicates that the user wants to send and display the animation of the corresponding target gift model; the gift display operation may be implemented by triggering a preset control on the virtual manipulation device or through voice input.
In this embodiment, before the gift display operation on the target gift model among the plurality of candidate gift models is received, the currently selected target gift model may further be determined and displayed in a selected state; for example, the target gift model may be displayed enlarged according to a preset proportion to indicate that it is in the preset selected state, and so on.
In one embodiment of the present disclosure, in the case where the target gift model is of a preset burst (combo) type, a countdown animation corresponding to the target gift model is displayed according to a preset countdown duration, where the countdown animation is used to indicate the remaining countdown duration. The countdown duration corresponding to the countdown animation, the animation style of the countdown animation (for example, a progress bar style surrounding the target gift model), and the display position of the countdown animation can all be calibrated according to scene requirements.
When the remaining countdown duration is greater than 0, the target gift model is still selected; in response to receiving a preset gift display operation on the target gift model, the remaining countdown duration is reset to the full countdown duration, and the model animation corresponding to the target gift model is displayed in the virtual reality space.
In one embodiment of the present disclosure, when the remaining countdown duration reaches 0 and the preset gift display operation on the target gift model has still not been received, the display of the countdown animation is stopped and the selection of the target gift model is released, so that the target gift model is no longer in the selected state.
That is, in this embodiment, the animation of a target gift model of the preset burst type can be played repeatedly in a combo manner, achieving a continuous-sending effect for the gift. In some possible embodiments, when the model animation corresponding to the target gift model is displayed in the virtual reality space, the current total display count of the target gift model may also be displayed at an associated display position corresponding to the target gift model.
For example, when the model animation corresponding to the target gift model is a continuously sendable "heart" animation, as shown in Fig. 6A, the current total display count information "x1" is displayed after the "heart" target gift model, and a countdown progress bar in the form of a track surrounding the target gift model is displayed. When the filling progress of the countdown progress bar is less than 100% (the filling progress is regarded as 100% when the track closes into a rectangular frame), the remaining countdown duration is considered greater than 0. While the remaining countdown duration is greater than 0, as shown in Fig. 6B, if a preset gift display operation on the target gift model is received, the filling progress of the countdown progress bar is reset to 0% to start the next round of countdown, the "heart" animation corresponding to the target gift model is displayed in the virtual reality space, and the current total display count information "x2" is displayed after the "heart" target gift model, where the current total display count information may be displayed on the same layer as the target gift model.
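The burst-gift flow of the last few paragraphs reduces to a small state machine. The following sketch is illustrative only (the class name and per-frame tick API are assumptions): each repeated display operation within the countdown resets the timer and increments the "xN" counter; when the countdown expires, the model is deselected.

```typescript
// Illustrative state sketch of the burst ("combo") gift countdown.
class BurstGiftState {
  totalCount = 0; // drives the "x1", "x2", ... label
  private remainingMs: number;

  constructor(private readonly countdownMs: number) {
    this.remainingMs = countdownMs;
  }

  // Called when the preset gift display operation is received again.
  onGiftDisplay(playAnimation: () => void): void {
    if (this.remainingMs > 0) {
      this.remainingMs = this.countdownMs; // restart the countdown round
      this.totalCount += 1;
      playAnimation();
    }
  }

  // Called every frame; returns false once the model should be deselected.
  tick(deltaMs: number): boolean {
    this.remainingMs = Math.max(0, this.remainingMs - deltaMs);
    return this.remainingMs > 0;
  }
}
```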
In summary, with the model display method based on a virtual reality space of the embodiments of the present disclosure, an interactive operation mode can be flexibly selected in the virtual reality space when gift interaction is performed, further improving the flexibility and interest of the operation.
In order to implement the above embodiment, the present disclosure further proposes a model display device based on a virtual reality space.
Fig. 7 is a schematic structural diagram of a model display apparatus based on a virtual reality space according to an embodiment of the disclosure. The apparatus may be implemented in software and/or hardware and may generally be integrated in an electronic device to perform model display based on a virtual reality space. As shown in Fig. 7, the apparatus includes a first display module 710 and a second display module 720, where:
the first display module 710 is configured to, in response to receiving a model selection operation on the gift control model in a first layer, display in the first layer a gift selection panel containing a plurality of candidate gift models, where the first layer is, among the plurality of layers corresponding to the currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position; and
the second display module 720 is configured to, in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, display a model animation corresponding to the target gift model in the virtual reality space.
The model display apparatus based on a virtual reality space provided by the embodiments of the present disclosure can execute the model display method based on a virtual reality space provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects of the method; the implementation principle is similar and is not repeated here.
To implement the above embodiments, the present disclosure also proposes a computer program product including a computer program/instructions which, when executed by a processor, implement the model display method based on a virtual reality space in the above embodiments.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Referring now specifically to Fig. 8, a schematic structural diagram of an electronic device 800 suitable for implementing embodiments of the present disclosure is shown. The electronic device 800 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 8 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 8, the electronic device 800 may include a processor (e.g., a central processing unit, a graphics processor, etc.) 801 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; output devices 807 including, for example, a liquid crystal display (LCD), speakers, vibrators, and the like; storage devices 808 including, for example, magnetic tape, hard disk, and the like; and communication devices 809. The communication devices 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While Fig. 8 shows an electronic device 800 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication devices 809, installed from the storage device 808, or installed from the ROM 802. When the computer program is executed by the processor 801, the above-described functions defined in the model display method based on a virtual reality space of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
in response to receiving a model selection operation on a gift control model in a first layer, display in the first layer a gift selection panel containing a plurality of candidate gift models, where the first layer is, among the plurality of layers corresponding to the currently played virtual reality video, the layer with the smallest display distance from the user's current viewing position; and further, in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, display a model animation corresponding to the target gift model in the virtual reality space. In this way, gift-sending interaction in the virtual reality space is realized, the depth information of the virtual reality space is fully utilized, a VR display effect for the gift animation is achieved, and the user's interactive experience when watching video is improved.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware. In some cases, the names of the units do not constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the features described above, or their equivalents, without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (12)

1. A model display method based on a virtual reality space, characterized by comprising:
in response to receiving a model selection operation on a gift control model in a first layer, displaying, in the first layer, a gift selection panel that includes a plurality of candidate gift models, wherein
the first layer is the layer, among a plurality of layers corresponding to a currently played virtual reality video, having the smallest display distance from a user's current viewing position; and
in response to receiving a gift display operation on a target gift model among the plurality of candidate gift models, displaying, in a virtual reality space, a model animation corresponding to the target gift model.
2. The method of claim 1, further comprising, prior to the receiving of the model selection operation on the gift control model in the first layer:
acquiring layer types of the plurality of layers corresponding to the currently played virtual reality video, wherein the plurality of layers include the first layer;
determining spatial position areas of the plurality of layers in the virtual reality space according to the layer types, wherein the spatial position areas have different display distances from the user's current viewing position; and
displaying the gift control model in the spatial position area corresponding to the first layer.
3. The method of claim 1, further comprising, prior to the receiving of the model selection operation on the gift control model in the first layer:
rendering a trigger direction indication model in the virtual reality space according to a control direction corresponding to a virtual control device, wherein
the trigger direction indication model indicates, in the virtual reality space, a trigger direction corresponding to the control direction, and the trigger direction follows the control direction in real time.
4. The method of claim 3, wherein the receiving of the model selection operation on the gift control model in the first layer comprises:
receiving a preset model selection operation when the gift control model is located in the trigger direction indicated by the trigger direction indication model.
5. The method of claim 1, further comprising, after receiving the model selection operation on the gift control model in the first layer:
displaying the gift control model in a preset control-selected state.
6. The method of claim 3, wherein the receiving of the gift display operation on the target gift model among the plurality of candidate gift models comprises:
receiving a preset gift display operation when the target gift model is located in the trigger direction indicated by the trigger direction indication model.
7. The method of claim 6, further comprising, after the receiving of the gift display operation on the target gift model among the plurality of candidate gift models:
in a case that the target gift model is of a preset continuous-sending type, displaying a countdown animation corresponding to the target gift model according to a preset countdown duration, wherein the countdown animation prompts the remaining countdown duration; and
when the remaining countdown duration is greater than or equal to 0, in response to receiving the preset gift display operation on the target gift model, resetting the remaining countdown duration to the preset countdown duration and displaying the model animation corresponding to the target gift model in the virtual reality space.
8. The method of claim 7, further comprising, when displaying the model animation corresponding to the target gift model in the virtual reality space:
displaying information on the current total number of times the target gift model has been displayed at an associated display position corresponding to the target gift model.
9. The method of claim 1 or 2, wherein the displaying of the model animation corresponding to the target gift model in the virtual reality space comprises:
identifying a model type of the target gift model; and
determining, among the plurality of layers, a second layer matching the model type, and displaying the model animation corresponding to the target gift model in a spatial position area corresponding to the second layer.
10. A model display device based on a virtual reality space, characterized by comprising:
a first display module, configured to display, in a first layer, a gift selection panel including a plurality of candidate gift models in response to receiving a model selection operation on a gift control model in the first layer, wherein
the first layer is the layer, among a plurality of layers corresponding to a currently played virtual reality video, having the smallest display distance from a user's current viewing position; and
a second display module, configured to display, in a virtual reality space, a model animation corresponding to a target gift model among the plurality of candidate gift models in response to receiving a gift display operation on the target gift model.
11. An electronic device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the model display method based on a virtual reality space of any one of claims 1-9.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program for executing the model display method based on a virtual reality space of any one of claims 1-9.
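For orientation only, the layer-placement and layer-matching steps of claims 2 and 9 can be sketched as follows. The layer types and the distance table are illustrative assumptions; the actual mapping from layer type to spatial position area is left to the implementation.

```typescript
// Sketch of claims 2 and 9: placing layers at different display distances by
// layer type, and matching a gift's model type to a "second layer" whose
// spatial position area hosts the model animation. All names are hypothetical.

type LayerType = "ui" | "subtitle" | "effect" | "background";

interface PlacedLayer {
  type: LayerType;
  displayDistance: number; // differs per layer, per claim 2
}

// Hypothetical mapping from layer type to display distance (meters in VR space).
const DISTANCE_BY_TYPE: Record<LayerType, number> = {
  ui: 1.5,
  subtitle: 2.0,
  effect: 3.0,
  background: 10.0,
};

// Claim 2: determine a spatial position area (here reduced to a distance) per layer type.
function placeLayers(types: LayerType[]): PlacedLayer[] {
  return types.map((type) => ({ type, displayDistance: DISTANCE_BY_TYPE[type] }));
}

// Claim 9: pick the second layer matching the gift's model type; the model
// animation would then be displayed in that layer's spatial position area.
function secondLayerFor(modelType: LayerType, layers: PlacedLayer[]): PlacedLayer | undefined {
  return layers.find((l) => l.type === modelType);
}
```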
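The trigger-direction test of claims 3, 4 and 6 amounts to checking whether the pointing ray derived from the control direction passes close enough to a model. A simplified version, under the assumption of a fixed angular threshold and basic vector math (none of which is specified by the claims), might look like:

```typescript
// Sketch of claims 3-6: the trigger direction indication model follows the
// control direction in real time; a model counts as "located in the trigger
// direction" when the angle between the pointing ray and the direction to the
// model is below a small threshold. The 5-degree threshold is an assumption.

interface Vec3 { x: number; y: number; z: number; }

const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const norm = (a: Vec3): number => Math.sqrt(dot(a, a));

function isInTriggerDirection(viewPos: Vec3, triggerDir: Vec3, modelPos: Vec3): boolean {
  // Direction from the current viewing position to the candidate model.
  const toModel: Vec3 = {
    x: modelPos.x - viewPos.x,
    y: modelPos.y - viewPos.y,
    z: modelPos.z - viewPos.z,
  };
  const cos = dot(triggerDir, toModel) / (norm(triggerDir) * norm(toModel));
  return cos > Math.cos((5 * Math.PI) / 180); // within ~5 degrees of the ray
}
```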
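Finally, the continuous-sending behavior of claims 7 and 8 is essentially a resettable countdown with a running total. A minimal sketch, assuming a 5-second countdown and a render-loop tick (both hypothetical values and names):

```typescript
// Sketch of claims 7 and 8: for a gift of the preset continuous-sending type,
// a countdown runs after each send; sending again while the remaining duration
// is >= 0 resets the countdown, replays the animation, and increments the
// total display count shown at the gift's associated display position.

class ComboGift {
  private remainingMs = 0;   // remaining countdown duration
  private totalDisplays = 0; // claim 8: current total number of display times

  constructor(private readonly countdownMs: number = 5000) {}

  // Called on each preset gift display operation (claim 7).
  onGiftDisplay(): void {
    if (this.remainingMs >= 0) {
      this.remainingMs = this.countdownMs; // reset remaining duration to the preset duration
      this.totalDisplays += 1;
      console.log(`Play animation; total displays so far: ${this.totalDisplays}`);
    }
  }

  // Called by the render loop to advance the countdown animation; once the
  // remaining duration drops below 0, further sends no longer combo.
  tick(elapsedMs: number): void {
    this.remainingMs = Math.max(-1, this.remainingMs - elapsedMs);
  }
}
```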
CN202210993895.8A 2022-08-18 2022-08-18 Model display method, device, equipment and medium based on virtual reality space Pending CN117641025A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210993895.8A CN117641025A (en) 2022-08-18 2022-08-18 Model display method, device, equipment and medium based on virtual reality space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210993895.8A CN117641025A (en) 2022-08-18 2022-08-18 Model display method, device, equipment and medium based on virtual reality space

Publications (1)

Publication Number Publication Date
CN117641025A true CN117641025A (en) 2024-03-01

Family

ID=90015224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210993895.8A Pending CN117641025A (en) 2022-08-18 2022-08-18 Model display method, device, equipment and medium based on virtual reality space

Country Status (1)

Country Link
CN (1) CN117641025A (en)

Similar Documents

Publication Publication Date Title
US20230405475A1 (en) Shooting method, apparatus, device and medium based on virtual reality space
CN114900625A (en) Subtitle rendering method, device, equipment and medium for virtual reality space
US20240028130A1 (en) Object movement control method, apparatus, and device
CN117641025A (en) Model display method, device, equipment and medium based on virtual reality space
CN117641026A (en) Model display method, device, equipment and medium based on virtual reality space
CN117632063A (en) Display processing method, device, equipment and medium based on virtual reality space
CN117631810A (en) Operation processing method, device, equipment and medium based on virtual reality space
CN117765207A (en) Virtual interface display method, device, equipment and medium
US20240078734A1 (en) Information interaction method and apparatus, electronic device and storage medium
CN117572994A (en) Virtual object display processing method, device, equipment and medium
CN117640919A (en) Picture display method, device, equipment and medium based on virtual reality space
CN117632391A (en) Application control method, device, equipment and medium based on virtual reality space
US20240046588A1 (en) Virtual reality-based control method, apparatus, terminal, and storage medium
WO2024131405A1 (en) Object movement control method and apparatus, device, and medium
CN117636528A (en) Voting processing method, device, equipment and medium based on virtual reality space
CN117991889A (en) Information interaction method, device, electronic equipment and storage medium
CN117075770A (en) Interaction control method and device based on augmented reality, electronic equipment and storage medium
CN117354484A (en) Shooting processing method, device, equipment and medium based on virtual reality
CN118484078A (en) Virtual resource processing method, device, equipment and medium based on virtual reality
CN118229921A (en) Image display method, device, electronic equipment and storage medium
CN117435040A (en) Information interaction method, device, electronic equipment and storage medium
CN117631904A (en) Information interaction method, device, electronic equipment and storage medium
CN117376591A (en) Scene switching processing method, device, equipment and medium based on virtual reality
CN117641040A (en) Video processing method, device, electronic equipment and storage medium
CN117197400A (en) Information interaction method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination