CN116069974A - Virtual reality equipment and video playing method


Info

Publication number
CN116069974A
CN116069974A (application CN202111275376.XA)
Authority
CN
China
Prior art keywords
rendering
model
video
virtual reality
webpage
Prior art date
Legal status
Pending
Application number
CN202111275376.XA
Other languages
Chinese (zh)
Inventor
吴金旺
温佳乐
Current Assignee
Hisense Electronic Technology Shenzhen Co., Ltd.
Original Assignee
Hisense Electronic Technology Shenzhen Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hisense Electronic Technology Shenzhen Co., Ltd.
Priority to CN202111275376.XA
Publication of CN116069974A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval of video data
    • G06F16/74: Browsing; Visualisation therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/958: Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking

Abstract

The application provides a virtual reality device and a video playing method. When a user wears the virtual reality device to watch a web page video, the device can acquire a 3D model corresponding to the web page video from a server and, while playing the video, synchronously render the corresponding 3D model between the user's eyes and the virtual user interface, so that the user synchronously obtains a 3D viewing experience while watching the web page video.

Description

Virtual reality equipment and video playing method
Technical Field
The application relates to the technical field of virtual reality, and in particular to a virtual reality device and a video playing method.
Background
Virtual Reality (VR) technology is a display technology that uses a computer to simulate a virtual environment, thereby creating a sense of environmental immersion. A virtual reality device is a device that presents virtual pictures to a user by means of virtual display technology. In virtual reality devices, the VR browser is an important network video playing tool: video is typically played with the VR browser and displayed on a web page in the browser. As long as the web page supports VR effects, the VR browser can display the video on that page.
Currently, when watching video with a VR browser in a virtual reality device, users typically only get a rectangular 3D or flat large-screen experience. Although some web page videos carry 3D effects, their resolution is inferior to that of a real 3D model, so it is difficult for the user to make out the specific form and details of the objects expressed in the video. For example, when watching a space movie in which objects such as a black hole and a spaceship appear in the picture, a user of a current virtual reality device can only see flat objects in the movie picture displayed on the virtual user interface, and cannot perceive the black hole or the spaceship as solid entities. In such cases, the sense of reality the user perceives through the virtual reality device is not obvious.
Disclosure of Invention
The application provides a virtual reality device and a video playing method, which are used to solve the problem that existing virtual reality devices can hardly provide users with a convincing sense of reality.
In a first aspect, the present application provides a virtual reality device, comprising: a display configured to display a virtual user interface; and a controller configured to: acquire a 3D model corresponding to a web page video to be played on the virtual user interface, together with configuration information of the 3D model, where the 3D model is displayed in front of the virtual user interface and the configuration information represents the rendering process for displaying the 3D model in the virtual reality device; in the process of playing the web page video, determine whether the current playing progress of the web page video reaches the rendering moment of the 3D model; and if the playing progress reaches the rendering moment, continue playing the web page video while rendering the 3D model from the rendering moment using the configuration information.
When a user wears the virtual reality device to watch a web page video, the device can acquire the 3D model corresponding to the video from the server and synchronously render it between the user's eyes and the virtual user interface while playing the video, so that the user synchronously obtains a 3D viewing experience while watching the web page video.
In some implementations, the controller is further configured to: acquire a video URL address corresponding to the web page video to be played on the virtual user interface; acquire a configuration file corresponding to the web page video from a server according to the video URL address, where the configuration file includes configuration information of several 3D models corresponding to the web page video and the model URL addresses of those 3D models; and acquire the corresponding 3D models from the server according to the model URL addresses, where different 3D models correspond to different playing progresses of the web page video.
In some implementations, the configuration information includes the rendering moment, rendering duration, rendering start position, rendering end position, and type of animation to be rendered for the 3D model; and the controller is further configured to: in the process of playing the web page video, if the playing progress reaches the rendering moment, continue playing the web page video while, starting from the rendering moment, rendering the 3D model at the rendering start position according to the animation type, so that the 3D model is at the rendering end position when the rendering duration expires.
In some implementations, the controller is further configured to: after obtaining the video URL address corresponding to the web page video to be played on the virtual user interface, send a configuration request carrying the video URL address to the server, the configuration request being used to request the configuration file corresponding to the web page video; and receive the configuration file sent back by the server, the configuration file being sent to the virtual reality device when a 3D model corresponding to the video URL address exists in the server.
In some implementations, the controller is further configured to: establish a spatial rectangular coordinate system with the arc center of the virtual user interface as the origin, where the x axis extends toward the user's right side, the y axis extends toward the top of the user's head, and the z axis extends toward the virtual user interface; determine the start coordinate of the rendering start position and the end coordinate of the rendering end position in the spatial rectangular coordinate system; and render the 3D model at the start coordinate according to the animation type from the rendering moment, so that the 3D model is at the end coordinate when the rendering duration expires.
In a second aspect, the present application further provides a video playing method, including: acquiring a 3D model corresponding to a web page video to be played on a virtual user interface and configuration information of the 3D model, where the 3D model is displayed in front of the virtual user interface and the configuration information represents the rendering process for displaying the 3D model in the virtual reality device; in the process of playing the web page video, determining whether the current playing progress of the web page video reaches the rendering moment of the 3D model; and if the playing progress reaches the rendering moment, continuing to play the web page video while rendering the 3D model from the rendering moment using the configuration information.
The video playing method of the second aspect may be applied to the virtual reality device of the first aspect and specifically implemented by the controller of that device; the beneficial effects of the video playing method of the second aspect are therefore the same as those of the virtual reality device of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings needed in the embodiments are briefly described below; obviously, other drawings can be obtained from these drawings by those skilled in the art without inventive effort.
FIG. 1 illustrates a display system architecture diagram including a virtual reality device, according to some embodiments;
FIG. 2 illustrates a VR scene global interface schematic in accordance with some embodiments;
FIG. 3 illustrates a recommended content region schematic diagram of a global interface, according to some embodiments;
FIG. 4 illustrates an application shortcut entry area schematic for a global interface in accordance with some embodiments;
FIG. 5 illustrates a schematic diagram of a hover element of a global interface, according to some embodiments;
FIG. 6 illustrates a schematic diagram of a virtual reality device displaying web page video, according to some embodiments;
FIG. 7 illustrates a flow diagram of a method of playing web video in a virtual reality device, according to some embodiments;
FIG. 8 illustrates a schematic diagram of a rendering scene, in accordance with some embodiments;
FIG. 9 illustrates another schematic diagram of a rendering scene, in accordance with some embodiments;
FIG. 10 illustrates a schematic diagram of a virtual reality device rendering a 3D model, according to some embodiments;
FIG. 11 illustrates a schematic diagram of a virtual reality device acquiring a 3D model, according to some embodiments;
FIG. 12 illustrates a schematic diagram of a virtual reality device rendering a 3D model according to a video playback schedule, in accordance with some embodiments;
FIG. 13 illustrates another schematic diagram of a virtual reality device rendering a 3D model according to a video playback schedule, in accordance with some embodiments;
FIG. 14 illustrates a schematic diagram of a spatial rectangular coordinate system established in a rendering scene, in accordance with some embodiments;
fig. 15 illustrates a schematic diagram of interactions between a virtual reality device and a server, according to some embodiments.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the exemplary embodiments of the present application more apparent, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is apparent that the described exemplary embodiments are only some embodiments of the present application, but not all embodiments.
All other embodiments obtained by one of ordinary skill in the art based on the exemplary embodiments shown in the present application without inventive effort fall within the scope of the present application. Furthermore, while the disclosure is presented in terms of one or more exemplary embodiments, it should be understood that individual aspects of the disclosure may also separately constitute a complete technical solution.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the above figures are used to distinguish between similar objects and not necessarily to describe a particular order or sequence. It is to be understood that data so termed may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
Reference throughout this specification to "multiple embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic shown or described in connection with one embodiment may be combined, in whole or in part, with features, structures, or characteristics of one or more other embodiments without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments, the virtual reality device 500 generally refers to a display device that can be worn on the user's face to provide an immersive experience, including but not limited to VR glasses, augmented reality (AR) devices, VR gaming devices, mobile computing devices, and other wearable computers. In some embodiments of the present application, VR glasses are taken as the example when describing the technical solution, and it should be understood that the solution can likewise be applied to other types of virtual reality devices. The virtual reality device 500 may operate independently or may be connected to another smart display device as a peripheral, where the display device may be a smart TV, a computer, a tablet computer, a server, and so on.
After being worn on the user's face, the virtual reality device 500 may display a media asset picture, providing close-range images for both eyes to create an immersive experience. To present the media asset picture, the virtual reality device 500 may include a number of components for displaying pictures and for face wearing. Taking VR glasses as an example, the virtual reality device 500 may include a housing, position fixtures, an optical system, a display assembly, a posture detection circuit, an interface circuit, and the like. In practice, the optical system, display assembly, posture detection circuit, and interface circuit may be disposed in the housing for presenting a specific display picture, while position fixtures connected to both sides of the housing allow the device to be worn on the face.
The posture detection circuit contains posture detection elements such as a gravity acceleration sensor and a gyroscope. When the user's head moves or rotates, the circuit detects the user's posture and transmits the detected posture data to a processing element such as the controller, which adjusts the specific picture content in the display assembly according to the detected posture data.
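Purely as an illustrative sketch of this flow (the patent specifies no API; every name below is hypothetical), the posture-to-picture adjustment might look like:

```typescript
// Hypothetical sketch: posture data from the posture detection circuit drives
// the displayed picture. None of these names come from the patent.
interface PostureData {
  yaw: number;   // head rotation about the vertical axis, radians
  pitch: number; // head rotation up/down, radians
  roll: number;  // head tilt, radians
}

interface DisplayAssembly {
  setViewRotation(yaw: number, pitch: number, roll: number): void;
}

// Called by the controller whenever the sensors report new posture data.
function onPostureDetected(data: PostureData, display: DisplayAssembly): void {
  display.setViewRotation(data.yaw, data.pitch, data.roll);
}
```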
As shown in FIG. 1, in some embodiments, the virtual reality device 500 may be connected to the display device 200, and a network-based display system is constructed among the virtual reality device 500, the display device 200, and the server 400, among which data interaction can be performed in real time. For example, the display device 200 may obtain media asset data from the server 400, play it, and transmit specific picture content to the virtual reality device 500 for display.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device; the particular type, size, and resolution of the display device are not limited, and those skilled in the art will appreciate that its performance and configuration may be changed as needed. The display device 200 may provide a broadcast-receiving television function and may additionally provide smart network television functions with computer support, including but not limited to network TV, smart TV, Internet Protocol TV (IPTV), and the like.
The display device 200 and the virtual reality device 500 also exchange data with the server 400 through a variety of communication means. They may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. For example, the display device 200 may receive software program updates, access a remotely stored digital media library, or carry out electronic program guide (EPG) interactions by sending and receiving information. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers; other web service content such as video on demand and advertising services is also provided through the server 400.
During data interaction, the user may operate the display device 200 through the mobile terminal 300 and the remote controller 100, which may communicate with the display device 200 by a direct wireless connection or a non-direct connection. That is, in some embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 directly through Bluetooth, infrared, and the like; when transmitting a control instruction, they may send the instruction data directly to the display device 200 over Bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may also access the same wireless network with the display device 200 through a wireless router to establish indirect connection communication with the display device 200 through the wireless network. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may transmit the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also use the mobile terminal 300 and the remote controller 100 to directly interact with the virtual reality device 500, for example, the mobile terminal 300 and the remote controller 100 may be used as handles in a virtual reality scene to implement functions such as somatosensory interaction.
In some embodiments, the display assembly of the virtual reality device 500 includes a display screen and the drive circuitry associated with it. To present a specific picture and produce a stereoscopic effect, the display assembly may include two display screens, corresponding to the user's left and right eyes respectively. When a 3D effect is presented, the picture contents displayed in the left and right screens differ slightly, corresponding respectively to the pictures captured by the left camera and the right camera when the 3D film source was shot. Because the user's left and right eyes observe different picture contents, a picture with a strong stereoscopic impression is perceived when the device is worn.
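A minimal three.js sketch of this idea (an illustration only; the patent names no rendering library) places two virtual cameras a pupil distance apart so that each screen receives a slightly different picture:

```typescript
import * as THREE from 'three';

// Two eye cameras separated by an assumed interpupillary distance (IPD).
// The slight horizontal offset makes the left and right pictures differ,
// which is what produces the stereoscopic impression described above.
const IPD = 0.064; // roughly 64 mm, a typical value, not taken from the patent

function makeEyeCameras(aspect: number) {
  const left = new THREE.PerspectiveCamera(90, aspect, 0.1, 1000);
  const right = left.clone();
  left.position.x = -IPD / 2;  // left eye slightly left of center
  right.position.x = IPD / 2;  // right eye slightly right of center
  return { left, right };
}
```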
The optical system in the virtual reality device 500 is an optical module composed of a plurality of lenses. It is arranged between the user's eyes and the display screen; through the refraction of light by the lenses and the polarization effect of the polarizers on the lenses, the optical path can be lengthened so that the content presented by the display assembly appears clearly in the user's field of view. Meanwhile, to adapt to the eyesight of different users, the optical system also supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the distance between the lenses and hence the optical path, thereby adjusting the sharpness of the picture.
The interface circuit of the virtual reality device 500 may be used to transfer interaction data. Besides transferring posture data and display content data, in practice the virtual reality device 500 may also connect to other display devices or peripherals through the interface circuit, implementing more complex functions through data interaction with the connected device. For example, the virtual reality device 500 may be connected to a display device through the interface circuit so that the displayed picture is output to the display device in real time; or it may be connected to a handle that the user operates by hand to perform related operations in the VR user interface.
The VR user interface can be presented in a number of different UI layouts depending on user operation. For example, the user interface may include a global interface: the global UI shown in FIG. 2 after the AR/VR terminal starts may be displayed on the display screen of the AR/VR terminal or on the display of the display device. The global UI may include a recommended content area 1, a business classification extension area 2, an application shortcut entry area 3, and a hover area 4.
The recommended content area 1 is used to configure TAB columns of different classifications; media assets, topics, and the like can be selected and configured in the columns. The media assets may include services with media content such as 2D movies, educational courses, travel, 3D, 360-degree panoramas, live broadcasts, 4K movies, program applications, and games; the columns may select different template styles and support simultaneous recommendation and arrangement of media assets and topics, as shown in FIG. 3.
In some embodiments, the recommended content area 1 may also include a main interface and sub-interfaces. As shown in FIG. 3, the portion located in the center of the UI layout is the main interface, and the portions on both sides of it are sub-interfaces. The main interface and sub-interfaces can display different recommended contents. For example, according to the recommended film source type, 3D film source services may be displayed on the main interface, while the left sub-interface displays 2D film source services and the right sub-interface displays full-scene film source services.
Obviously, the main interface and sub-interfaces can display different service contents while being presented with different content layouts, and the user can switch between them through specific interactions. For example, by moving the focus mark left and right: when the focus mark is at the rightmost side of the main interface and continues moving right, the right sub-interface is brought to the center of the UI layout; the main interface then switches to displaying full-scene film source services, the left sub-interface switches to displaying 3D film source services, and the right sub-interface switches to displaying 2D film source services.
In addition, to make viewing easier, the main interface and sub-interfaces can be shown with different display effects. For example, the transparency of the sub-interfaces can be increased to give them a blurred effect and make the main interface stand out; or the sub-interfaces can be set to a gray effect while the main interface keeps its color effect, highlighting the main interface.
In some embodiments, a status bar may also be provided at the top of the recommended content area 1, in which a plurality of display controls may be placed, including common options such as time, network connection status, and power. The content of the status bar may be user-defined; for example, weather, a user avatar, and the like may be added. The user may select items in the status bar to perform the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display a clock window on the current interface or jump to a calendar interface; when the user clicks the network connection status option, the virtual reality device 500 may display a WiFi list on the current interface or jump to the network setup interface.
The content displayed in the status bar may be presented in different forms according to the setting status of a specific item. For example, the time control may be displayed directly as specific time text that changes with the time; the power control may be displayed in different graphic styles according to the current remaining battery of the virtual reality device 500.
The status bar enables the user to perform common control operations and thus quickly configure the virtual reality device 500. Since the setup procedure of the virtual reality device 500 includes many items, all commonly used setup options generally cannot be displayed in the status bar at once. For this reason, in some embodiments, an expansion option may also be provided in the status bar; after the expansion option is selected, an expansion window may be presented in the current interface, and further setting options may be provided in that window to implement other functions of the virtual reality device 500.
For example, in some embodiments, a "shortcut center" option may be set in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 may display a shortcut center window, which may include screen capture, screen recording, and screen casting options for waking up the corresponding functions.
The business classification extension area 2 supports configuring extension classifications of different categories. If a new service type appears, an independent TAB can be configured to display the corresponding page content. The service classifications in the business classification extension area 2 can also be re-ordered, and services can be taken offline. In some embodiments, the business classification extension area 2 may include: movies, education, travel, applications, and "my". In some embodiments, the area is configured to show major service classification TABs and supports configuring more classifications; its icons support configuration as shown in FIG. 3.
The application shortcut entry area 3 may display specified pre-installed applications (a plurality may be specified) in front for operational recommendation, and supports configuring special icon styles to replace the default icons. In some embodiments, the application shortcut entry area 3 further includes left and right movement controls for moving the option target and selecting different icons, as shown in FIG. 4.
The hover area 4 may be configured above the left or right diagonal side of the fixed area, and the hover element may be configured as an alternative figure or as a jump link. For example, after receiving a confirmation operation, the hover element jumps to an application or displays a designated function page, as shown in FIG. 5. In some embodiments, the hover element may also be configured without a jump link, purely for visual presentation.
In some embodiments, the global UI further includes a status bar at the top for displaying time, network connection status, power status, and more shortcut entries. After an icon is selected with the handle of the AR/VR terminal, i.e., the handheld controller, the icon displays a text prompt and expands left and right; the selected icon is stretched and expanded according to its position.
For example, after the search icon is selected, it displays the text "search" together with the original icon; clicking the icon or text again jumps to the search page. As further examples, clicking the favorites icon jumps to the favorites TAB, clicking the history icon locates and displays the history page by default, clicking the search icon jumps to the global search page, and clicking the message icon jumps to the message page.
In some embodiments, interaction may be performed through a peripheral device; for example, the handle of the AR/VR terminal may operate the user interface of the AR/VR terminal. The handle includes a back button; a home key, which realizes a reset function when long-pressed; volume up and down buttons; and a touch area, which supports clicking, sliding, and press-and-hold dragging of the focus.
After the user operates on the VR user interface, the virtual reality device 500 may be controlled to display certain video resource content. In the virtual reality device 500, a video asset is typically played with the VR browser and displayed on a web page of the VR browser. As shown in FIG. 6, a web page including a video display area 5 may be displayed on the main interface in the recommended content area 1. Elsewhere on the web page, video-related recommended content and some comments or video introduction content may be displayed.
In the virtual reality device 500, the VR browser is an important network video playing tool. As long as the web page supports VR effects, the VR browser can display the video on the web page. Currently, when watching video with the VR browser in the virtual reality device 500, users typically only get a rectangular 3D or flat large-screen experience. Although some web page videos carry 3D effects, their resolution is inferior to that of a real 3D model, so it is difficult for the user to make out the specific form and details of the objects expressed in the video.
For example, when watching a space movie in which objects such as a black hole and a spaceship appear in the picture, a user of the present virtual reality device 500 can only see flat objects in the movie picture displayed on the virtual user interface, and cannot perceive the black hole or the spaceship as solid entities. In such cases, the sense of reality the user perceives through the virtual reality device 500 is not obvious.
To solve the above problem that the virtual reality device 500 cannot provide users with a stronger sense of reality, an embodiment of the present application provides a virtual reality device 500 comprising a display and a controller. The display may display a virtual user interface, i.e., the aforementioned VR user interface, and so on. As shown in FIG. 7, the controller may be configured to perform the following steps:
Step S101, obtaining a 3D model corresponding to a webpage video to be played on a virtual user interface and configuration information of the 3D model.
To improve the user's virtual reality experience, different 3D model resources may be configured in advance on the server 400 for different web page videos. When the virtual reality device 500 needs to play a certain web page video, the corresponding 3D model resources and the like can be acquired from the server 400.
In the virtual reality device 500, the 3D model is displayed in front of the virtual user interface, i.e., in the rendering scene between the user's eyes and the virtual user interface; as shown in FIG. 8, a cube representing the 3D model is displayed between the user's eyes and the virtual user interface. The configuration information of the 3D model indicates the rendering process when the virtual reality device 500 displays the 3D model. Since the video plays continuously in the virtual reality device 500, the 3D model corresponding to the video is not stationary either; its position, shape, and so on need to change along with the video. For example, as shown in FIG. 9, as the video plays, the cube representing the 3D model moves from position 6 to position 7 in front of the virtual user interface. This process of displaying and changing the 3D model must be rendered by the virtual reality device 500, and in order to render the change process clearly, the configuration information of the 3D model needs to be acquired when rendering it.
The aforementioned rendering scene refers to a virtual scene constructed by the rendering engine of the virtual reality device 500 through a rendering program. For example, a virtual reality device 500 based on the Unity 3D rendering engine may construct a Unity 3D scene when rendering a display. In a Unity 3D scene, various virtual objects and functional controls may be added to present a specific usage scene. For example, when playing multimedia resources, a display panel may be added to the Unity 3D scene to present the multimedia picture; meanwhile, virtual object models such as seats, speakers, and people can be added to create a cinema effect.
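As a rough sketch of such a rendering scene (the patent names Unity 3D; the three.js code and the specific objects below are stand-in assumptions for illustration), a display panel carrying the video picture plus one decorative prop might be built like this:

```typescript
import * as THREE from 'three';

// Build a rendering scene containing a display panel for the video picture
// and one placeholder prop for the cinema effect. Sizes and positions are
// illustrative assumptions only.
function buildRenderScene(video: HTMLVideoElement): THREE.Scene {
  const scene = new THREE.Scene();

  // Display panel: a plane textured with the playing video.
  const panel = new THREE.Mesh(
    new THREE.PlaneGeometry(16, 9),
    new THREE.MeshBasicMaterial({ map: new THREE.VideoTexture(video) })
  );
  panel.position.set(0, 0, -10); // placed in front of the viewer
  scene.add(panel);

  // A stand-in "seat" object, as one of the cinema-effect models.
  const seat = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshBasicMaterial({ color: 0x333333 })
  );
  seat.position.set(0, -2, -5);
  scene.add(seat);

  return scene;
}
```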
It should be noted that a video may contain several characters or objects for which a 3D model can be created, such as trees, buildings, and cars. To better show 3D effects during video playing, one video can usually correspond to a plurality of 3D models, where different characters in the video correspond to different 3D models, and different objects likewise correspond to different 3D models.
For example, if the video played in the virtual reality device 500 includes automobile A, airplane B, tree C, and character D, the 3D models corresponding to the video may include 3D model A, 3D model B, 3D model C, and 3D model D.
Therefore, in the embodiments of the present application, the virtual reality device 500 may obtain more than one 3D model from the server 400 when playing a web page video, and correspondingly more than one piece of configuration information. Each 3D model only starts to be rendered when the video plays to a certain specific moment.
Step S102, in the process of playing the webpage video, determining whether the current playing progress of the webpage video reaches the rendering moment of the 3D model.
The rendering moment of a 3D model indicates when the virtual reality device 500 needs to begin rendering it. The rendering moment is tied to the playing progress of the web page video: only when a certain person or object appears in the web page video does the virtual reality device 500 start rendering the corresponding 3D model.
For example, as shown in FIG. 10, when automobile A appears in the web page video at time T1, the 3D model A corresponding to automobile A needs to be rendered from time T1, so time T1 is the rendering moment of 3D model A. Likewise, when airplane B appears in the web page video at time T2, the 3D model B corresponding to airplane B needs to be rendered from time T2, and time T2 is the rendering moment of 3D model B.
Step S103, if the playing progress reaches the rendering moment, the 3D model is rendered by using the configuration information from the rendering moment while continuing to play the webpage video.
In the embodiments of the present application, rendering a 3D model is a process of dynamically displaying the model's changes, and these changes are synchronized with the changes of the corresponding person or object in the web page video. Therefore, the rendering of the 3D model must be synchronized with the playing of the web page video. Suppose character D appears in the web page video at time T3, disappears at time T4, and between T3 and T4 changes from one action to another. When rendering the 3D model D corresponding to character D, rendering must likewise start at time T3, end at time T4, and reproduce the corresponding motion of 3D model D between T3 and T4, so that the rendering process of 3D model D is synchronized with the change process of character D in the web page video.
In this process, the configuration information of 3D model D describes the rendering process of 3D model D from time T3 to time T4.
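A minimal sketch of this synchronization (assuming an HTML video element and hypothetical begin/end callbacks; the patent does not prescribe this mechanism) could watch the playing progress and open or close each model's rendering window:

```typescript
// Start rendering a model when playback reaches its rendering moment and
// stop when its window ends, mirroring T3 and T4 above. Names are assumptions.
interface ModelSchedule {
  modelId: string;
  startTime: number; // rendering moment, seconds into the video (e.g. T3)
  endTime: number;   // when the model disappears (e.g. T4)
}

function watchPlayback(
  video: HTMLVideoElement,
  schedules: ModelSchedule[],
  beginRender: (id: string) => void,
  endRender: (id: string) => void
): void {
  const active = new Set<string>();
  video.addEventListener('timeupdate', () => {
    const t = video.currentTime;
    for (const s of schedules) {
      const shouldRender = t >= s.startTime && t < s.endTime;
      if (shouldRender && !active.has(s.modelId)) {
        active.add(s.modelId);
        beginRender(s.modelId); // playback continues; rendering starts in sync
      } else if (!shouldRender && active.has(s.modelId)) {
        active.delete(s.modelId);
        endRender(s.modelId);
      }
    }
  });
}
```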
When a user wears the virtual reality device 500 provided in the embodiments of the present application to watch a web page video, the virtual reality device 500 may acquire the 3D model corresponding to the web page video from the server 400 and synchronously render it between the user's eyes and the virtual user interface while playing the video, so that the user synchronously obtains a 3D viewing experience while watching the web page video.
When the virtual reality device 500 acquires 3D models from the server 400, the server 400 needs to determine exactly which video the virtual reality device 500 is about to play, so that it can accurately determine which 3D model or models to provide. In general, each web page video has its own unique URL address, so the virtual reality device 500 may send the video URL address of the web page video to be played to the server 400, and the server 400 identifies the web page video and its 3D model resources through that video URL address.
Thus, in some embodiments, as shown in fig. 11, the controller in the virtual reality device 500 may be further configured to perform the steps of:
Step S201, obtaining a video URL address corresponding to the web page video to be played on the virtual user interface.
Step S202, according to the URL address of the video, a configuration file corresponding to the webpage video is obtained from the server 400.
Typically, one web page video corresponds to one configuration file, which includes the configuration information of the several 3D models corresponding to the web page video and the model URL addresses of those 3D models.
After acquiring the configuration file, the virtual reality device 500 needs to parse it, thereby obtaining the model URL addresses and configuration information of the several 3D models.
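The patent does not fix a file format; assuming JSON purely for illustration, the parsed configuration file might look like the following sketch (field names are modeled on the configuration items described in the text, not taken from the patent):

```typescript
// Hypothetical JSON shape for the configuration file.
interface ModelConfig {
  modelUrl: string;                        // model URL address of the 3D model
  renderMoment: number;                    // rendering moment, seconds into the video
  renderDuration: number;                  // rendering duration, seconds
  startPosition: [number, number, number]; // rendering start position (x1, y1, z1)
  endPosition: [number, number, number];   // rendering end position (x2, y2, z2)
  animationType: 'rotate' | 'linear' | 'morph'; // type of animation to be rendered
}

interface VideoProfile {
  videoUrl: string;      // the video URL address the profile belongs to
  models: ModelConfig[]; // one configuration unit per 3D model
}

function parseProfile(raw: string): VideoProfile {
  // Parsing yields the model URL addresses and configuration information.
  return JSON.parse(raw) as VideoProfile;
}
```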
Step S203, acquiring the corresponding 3D model from the server 400 according to the model URL address.
Because the configuration file does not include the 3D model resources themselves, the virtual reality device 500 further needs to acquire each corresponding 3D model from the server 400 according to its model URL address.
For example, after the virtual reality device 500 parses the configuration file and obtains the model URL addresses of five 3D models, it also needs to acquire the five corresponding 3D models from the server 400.
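Continuing the sketch (glTF and the three.js loader are assumptions; the patent names no model format), fetching the models could be as simple as:

```typescript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';

// Fetch every 3D model resource by its model URL address, in parallel.
// glTF is an assumed format; the patent does not specify one.
async function loadModels(modelUrls: string[]) {
  const loader = new GLTFLoader();
  // e.g. 5 URLs parsed from the profile yield 5 models fetched from the server
  return Promise.all(modelUrls.map((url) => loader.loadAsync(url)));
}
```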
In some web page videos, the persons and objects are limited, and the 3D models are then rendered sequentially during playing. For example, as shown in FIG. 12, in a one-minute web page video where only automobile A is on the screen from 00:00 to 00:10 and only airplane B is on the screen thereafter, the virtual reality device 500 may render 3D model A first and then render 3D model B. In this case, the 3D models of the web page video each correspond to a different playing progress.
However, some web page videos contain relatively rich content, and two or more persons and objects are likely to appear simultaneously within a certain period. In this case, the virtual reality device 500 needs to render multiple 3D models at the same time. For example, as shown in FIG. 13, in a one-minute web page video where automobile A is on the screen from 00:00 to 00:50, airplane B from 00:10 to 00:40, and character D from 00:30 to 00:50, the virtual reality device 500 needs to render 3D model A alone from 00:00 to 00:10, 3D models A and B from 00:10 to 00:30, 3D models A, B, and D from 00:30 to 00:40, and 3D models A and D from 00:40 to 00:50. In this case, some 3D models correspond to different playing progresses of the web page video, while others correspond to the same playing progress.
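A small pure-function sketch (the times are taken from the FIG. 13 example above; the function itself is an illustration, not part of the patent) shows how the set of models to render at any playing progress could be computed:

```typescript
// Which 3D models should be on screen at playing progress t (in seconds)?
interface RenderWindow { model: string; start: number; end: number; }

const fig13Schedule: RenderWindow[] = [
  { model: 'A', start: 0, end: 50 },  // automobile A: 00:00 to 00:50
  { model: 'B', start: 10, end: 40 }, // airplane B:   00:10 to 00:40
  { model: 'D', start: 30, end: 50 }, // character D:  00:30 to 00:50
];

function activeModels(t: number, windows: RenderWindow[]): string[] {
  return windows.filter((w) => t >= w.start && t < w.end).map((w) => w.model);
}

// activeModels(35, fig13Schedule) returns ['A', 'B', 'D']:
// three 3D models rendered simultaneously, as in FIG. 13.
```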
In addition, since the web page video is displayed on a web page, the video URL address in the embodiments of the present application may also be the web page URL address of the page where the video is located.
To enable the virtual reality device 500 to render the 3D model and its changes clearly, the configuration information of a 3D model may generally include the rendering moment, rendering duration, rendering start position, rendering end position, the type of animation to be rendered, and the like.
The animation type may also be called the change type of the 3D model, and may include, but is not limited to, rotation, linear movement, and changes of shape and state driven by parameter adjustment. One or more of these types may be adopted with reference to the actual changes of the corresponding person or object in the web page video.
It will be appreciated that rendering a 3D model according to its animation type in the virtual reality device 500 is a process of simulating the actual changes of the corresponding person or object in the web page video.
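As an illustrative sketch of applying an animation type per frame (three.js is again an assumption; 'rotate' and 'linear' correspond to the change types named above, and the parameter values are not from the patent):

```typescript
import * as THREE from 'three';

// Apply one animation type to a model each frame, with progress in [0, 1]
// running from the rendering moment to the end of the rendering duration.
type AnimationType = 'rotate' | 'linear';

function animate(mesh: THREE.Mesh, type: AnimationType, progress: number,
                 start: THREE.Vector3, end: THREE.Vector3): void {
  switch (type) {
    case 'rotate':
      mesh.rotation.y = progress * Math.PI * 2; // one full turn over the duration
      break;
    case 'linear':
      mesh.position.lerpVectors(start, end, progress); // straight-line movement
      break;
  }
}
```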
The rendering start position and rendering end position in the configuration information both lie in the rendering scene between the user's eyes and the virtual user interface. After the virtual reality device 500 acquires the 3D model and the configuration information, the controller may be configured to perform the following step:
In the process of playing the webpage video, if the playing progress reaches the rendering moment, starting from the rendering moment while continuing to play the webpage video, and rendering the 3D model at the rendering starting position according to the animation type so that the 3D model is at the rendering ending position when the timing of the rendering duration is finished.
In some embodiments, if the position of a person or object in the web page video relative to the user's eyes does not change, the position of the corresponding 3D model between the user and the virtual user interface need not change either, in which case the rendering start position and rendering end position may be the same.
To represent the rendering start and end positions more clearly, in some embodiments a spatial rectangular coordinate system may also be established within the space where the virtual user interface is located. To enhance the user's experience, the virtual user interface provided by the virtual reality device 500 is generally arc-shaped. In the space where the virtual user interface is located, a spatial rectangular coordinate system can then be established with the arc center of the arc-shaped virtual user interface as the origin, see FIG. 14, where the x axis extends toward the user's right side, the y axis extends toward the top of the user's head, and the z axis extends toward the virtual user interface.
In rendering the 3D model, the controller of the virtual reality device 500 may be further configured to perform the steps of:
Step S301, establishing a spatial rectangular coordinate system with the arc center of the virtual user interface as the origin.
Taking the spatial rectangular coordinate system in FIG. 14 as an example, its x axis extends toward the user's right side, its y axis extends toward the top of the user's head, and its z axis extends toward the virtual user interface.
Step S302, determining the start coordinate of the rendering start position and the end coordinate of the rendering end position in the spatial rectangular coordinate system.
After the spatial rectangular coordinate system is established, every position in the space can be represented by three-dimensional coordinates: the start coordinate of the rendering start position may be (x1, y1, z1), and the end coordinate of the rendering end position may be (x2, y2, z2). During rendering, the motion track of the 3D model is the track that moves the model from (x1, y1, z1) to (x2, y2, z2), as sketched below.
Step S303, starting from the rendering moment, rendering the 3D model at the start coordinate according to the animation type, so that the 3D model is at the end coordinate when the rendering duration timing ends.
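A compact sketch of steps S301 to S303 (coordinate values are illustrative; three.js assumed): interpolate the model from the start coordinate to the end coordinate over the rendering duration.

```typescript
import * as THREE from 'three';

// Interpolate from (x1, y1, z1) to (x2, y2, z2) over the rendering duration.
// The coordinate values below are illustrative assumptions only.
const start = new THREE.Vector3(-2, 0, -4); // start coordinate (x1, y1, z1)
const end = new THREE.Vector3(2, 1, -6);    // end coordinate (x2, y2, z2)

function positionAt(elapsed: number, duration: number): THREE.Vector3 {
  const t = Math.min(elapsed / duration, 1); // 0 at the rendering moment, 1 at the end
  return new THREE.Vector3().lerpVectors(start, end, t);
}
// positionAt(duration, duration) equals the end coordinate, as step S303 requires.
```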
In addition, in some embodiments, to output the rendered picture, the virtual reality device 500 may also set virtual cameras in the aforementioned Unity 3D scene. For example, the virtual reality device 500 may set a left-eye camera and a right-eye camera in the Unity 3D scene according to the positional relationship of the user's two eyes; the two virtual cameras shoot the objects in the Unity 3D scene simultaneously and output rendered pictures to the left and right displays respectively. For a better immersive experience, the angles of the two virtual cameras in the Unity 3D scene can be adjusted in real time according to the posture sensor of the virtual reality device 500, so that as the user moves while wearing the virtual reality device 500, rendered pictures of the Unity 3D scene at different viewing angles are output in real time.
In a virtual reality device 500 provided with virtual cameras, the spatial rectangular coordinate system may also be established with the position of either virtual camera as the origin, the x axis extending toward the user's right side, the y axis extending toward the top of the user's head, and the z axis extending toward the virtual user interface.
Alternatively, in some embodiments, the x, y, and/or z axes of the spatial rectangular coordinate system may also extend in the directions opposite to those shown in FIG. 14.
In the foregoing embodiments, the virtual reality device 500 may obtain the 3D model corresponding to the web page video to be played from the server 400. However, not all web page videos have corresponding 3D models, and the server 400 can send a configuration file to the virtual reality device 500 only after determining that the web page video has a corresponding 3D model configuration file. In some embodiments, as shown in FIG. 15, after acquiring the video URL address of the web page video, the virtual reality device 500 may first send a configuration request for the web page video to the server 400. The configuration request includes a number of request parameters, such as the video URL address of the web page video and the requested configuration file type. Upon receiving the configuration request, the server 400 determines whether 3D model resources and data associated with the video URL address exist in the server 400; if so, it sends the configuration file associated with those 3D model resources and data to the virtual reality device 500.
The configuration file sent by the server 400 to the virtual reality device 500 includes an array for the several 3D models corresponding to the web page video; a number of 3D model configuration units are stored in the array, and each configuration unit includes the model URL address of one 3D model and the configuration information of that 3D model.
If no 3D model resources and data corresponding to the web page video exist in the server 400, the server 400 notifies the virtual reality device 500 that no 3D model is currently available; the virtual reality device 500 then starts to play the web page video normally without rendering any 3D model, and the user wearing the virtual reality device 500 sees an ordinary VR effect.
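A minimal sketch of this exchange (the endpoint path and response handling are assumptions; FIG. 15 defines only the interaction, not an API):

```typescript
// Hypothetical request/response for the FIG. 15 exchange. The endpoint URL
// and response shape below are assumptions, not from the patent.
interface VideoProfileResponse { videoUrl: string; models: unknown[]; }

async function requestProfile(videoUrl: string): Promise<VideoProfileResponse | null> {
  // The configuration request carries the video URL address as a parameter.
  const res = await fetch(
    'https://server.example/profile?video=' + encodeURIComponent(videoUrl)
  );
  if (!res.ok) return null; // no 3D model for this video: play normally, render nothing
  return (await res.json()) as VideoProfileResponse;
}
```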
In the case described above, the controller of the virtual reality device 500 may be further configured to perform the following steps:
Step S401, after obtaining the video URL address corresponding to the web page video to be played on the virtual user interface, sending a configuration request with the video URL address to the server 400.
Step S402, in the case that a 3D model corresponding to the web page video exists in the server 400, receiving the configuration file sent back by the server 400.
To solve the problem that the virtual reality device 500 in the foregoing embodiments cannot provide users with a stronger sense of reality, the embodiments of the present application further provide a video playing method, which can be applied to the virtual reality device 500 of the foregoing embodiments and implemented by its controller. The method may specifically include the following steps:
Step S101, obtaining a 3D model corresponding to a webpage video to be played on a virtual user interface and configuration information of the 3D model. Wherein the 3D model is displayed at the front end of the virtual user interface, and the configuration information is used to represent a rendering process for displaying the 3D model in the virtual reality device 500.
Step S102, in the process of playing the webpage video, determining whether the current playing progress of the webpage video reaches the rendering moment of the 3D model.
Step S103, if the playing progress reaches the rendering moment, the 3D model is rendered by using the configuration information from the rendering moment while continuing to play the webpage video.
As can be seen from the above technical solutions, in the video playing method provided by this embodiment, when a user wearing the virtual reality device 500 watches a certain web page video, the 3D model corresponding to the web page video is obtained from the server 400. When the playing progress of the web page video reaches the rendering moment of the 3D model, the 3D model is rendered according to its configuration information while the web page video continues to play synchronously, so that the user synchronously obtains a 3D viewing experience while watching the web page video.
The foregoing detailed description of the embodiments merely illustrates the general principles of the present application and should not be taken as limiting its scope of protection in any way. Any other embodiments developed by those skilled in the art according to the present application without inventive effort fall within the scope of protection of the present application.

Claims (10)

1. A virtual reality device, comprising:
a display configured to display a virtual user interface;
a controller configured to:
acquiring a 3D model corresponding to a webpage video to be played on a virtual user interface and configuration information of the 3D model; the 3D model is displayed at the front end of the virtual user interface, and the configuration information is used for representing a rendering process for displaying the 3D model in the virtual reality device;
in the process of playing the webpage video, determining whether the current playing progress of the webpage video reaches the rendering moment of the 3D model;
and if the playing progress reaches the rendering moment, starting from the rendering moment while continuing to play the webpage video, and rendering the 3D model by using the configuration information.
2. The virtual reality device of claim 1, wherein the controller is further configured to:
acquiring a video URL address corresponding to a webpage video to be played on a virtual user interface;
acquiring a configuration file corresponding to the webpage video from a server according to the video URL address; the configuration file comprises configuration information of a plurality of 3D models corresponding to the webpage video and model URL addresses of the 3D models;
acquiring a corresponding 3D model from the server according to the model URL address; different 3D models correspond to different playing progress on the webpage video.
3. The virtual reality device of any of claims 1-2, wherein the configuration information includes a rendering time, a rendering duration, a rendering start position, a rendering end position, and a type of animation to be rendered for the 3D model; and, the controller is further configured to:
and in the process of playing the webpage video, if the playing progress reaches the rendering moment, starting from the rendering moment while continuing to play the webpage video, and rendering the 3D model at the rendering starting position according to the animation type so that the 3D model is at the rendering ending position when the timing of the rendering duration is finished.
4. The virtual reality device of claim 2, wherein the controller is further configured to:
after the video URL address corresponding to the webpage video to be played on the virtual user interface is obtained, sending a configuration request carrying the video URL address to the server; wherein the configuration request is used for requesting the configuration file corresponding to the webpage video from the server;
receiving the configuration file returned by the server; wherein the configuration file is sent to the virtual reality device only when a 3D model corresponding to the video URL address exists on the server.
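On the server side, claim 4 amounts to a conditional response: return the configuration file if a 3D model exists for the requested video URL, otherwise return nothing so the device falls back to plain video playback. A hypothetical handler, sketched with Node's built-in http module and an in-memory lookup table:

```typescript
// Hypothetical server-side handler for the configuration request of claim 4.
import * as http from "node:http";

const configsByVideoUrl = new Map<string, string>(); // video URL -> config JSON

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const videoUrl = url.searchParams.get("video") ?? "";
  const config = configsByVideoUrl.get(videoUrl);

  if (config !== undefined) {
    // A 3D model exists for this video URL: send back its configuration file.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(config);
  } else {
    // No corresponding 3D model: the device plays the video without one.
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```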
5. The virtual reality device of claim 3, wherein the controller is further configured to:
establishing a three-dimensional Cartesian coordinate system with the arc center of the virtual user interface as the origin; wherein the x-axis of the coordinate system extends toward the user's right side, the y-axis extends toward the top of the user's head, and the z-axis extends toward the virtual user interface;
determining, in the Cartesian coordinate system, the start coordinate of the rendering start position and the end coordinate of the rendering end position;
and, starting from the rendering moment, rendering the 3D model at the start coordinate according to the animation type, so that the 3D model is at the end coordinate when the rendering duration elapses.
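For a linear animation type, the motion described in claim 5 reduces to interpolating the model's position from the start coordinate to the end coordinate over the rendering duration. A minimal sketch, assuming simple linear interpolation in the claimed coordinate system (origin at the arc center, x toward the user's right, y upward, z toward the interface):

```typescript
// Linear interpolation from start to end coordinate over the rendering duration.
type Vec3 = { x: number; y: number; z: number };

function positionAt(start: Vec3, end: Vec3, durationS: number, elapsedS: number): Vec3 {
  const t = Math.min(Math.max(elapsedS / durationS, 0), 1); // clamp to [0, 1]
  return {
    x: start.x + (end.x - start.x) * t,
    y: start.y + (end.y - start.y) * t,
    z: start.z + (end.z - start.z) * t,
  };
}
```

At t = 1, i.e. when the rendering duration elapses, the model sits exactly at the end coordinate, as the claim requires.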
6. A video playing method, the method comprising:
acquiring a 3D model corresponding to a webpage video to be played on a virtual user interface and configuration information of the 3D model; wherein the 3D model is displayed in front of the virtual user interface, and the configuration information represents a rendering process for displaying the 3D model in the virtual reality device;
in the process of playing the webpage video, determining whether the current playing progress of the webpage video reaches the rendering moment of the 3D model;
and if the playing progress reaches the rendering moment, continuing to play the webpage video while, starting from the rendering moment, rendering the 3D model by using the configuration information.
7. The method according to claim 6, wherein the step of obtaining the 3D model corresponding to the web page video to be played on the virtual user interface and the configuration information of the 3D model includes:
acquiring a video URL address corresponding to a webpage video to be played on a virtual user interface;
acquiring a configuration file corresponding to the webpage video from a server according to the video URL address; the configuration file comprises configuration information of a plurality of 3D models corresponding to the webpage video and model URL addresses of the 3D models;
acquiring a corresponding 3D model from the server according to the model URL address; wherein different 3D models correspond to different playback positions in the webpage video.
8. The method according to any one of claims 6-7, wherein the configuration information includes a rendering moment, a rendering duration, a rendering start position, a rendering end position, and an animation type for rendering the 3D model; and wherein the step of, if the playing progress reaches the rendering moment, continuing to play the webpage video while rendering the 3D model by using the configuration information starting from the rendering moment, includes:
in the process of playing the webpage video, if the playing progress reaches the rendering moment, continuing to play the webpage video while, starting from the rendering moment, rendering the 3D model at the rendering start position according to the animation type, so that the 3D model is at the rendering end position when the rendering duration elapses.
9. The method according to claim 7, further comprising, after obtaining the video URL address corresponding to the webpage video to be played on the virtual user interface:
sending a configuration request carrying the video URL address to a server; wherein the configuration request is used for requesting the configuration file corresponding to the webpage video from the server;
receiving the configuration file returned by the server; wherein the configuration file is sent to the virtual reality device only when a 3D model corresponding to the video URL address exists on the server.
10. The method according to claim 8, wherein the step of, starting from the rendering moment while continuing to play the webpage video, rendering the 3D model at the rendering start position according to the animation type so that the 3D model is at the rendering end position when the rendering duration elapses, includes:
establishing a three-dimensional Cartesian coordinate system with the arc center of the virtual user interface as the origin; wherein the x-axis of the coordinate system extends toward the user's right side, the y-axis extends toward the top of the user's head, and the z-axis extends toward the virtual user interface;
determining, in the Cartesian coordinate system, the start coordinate of the rendering start position and the end coordinate of the rendering end position;
and, starting from the rendering moment, rendering the 3D model at the start coordinate according to the animation type, so that the 3D model is at the end coordinate when the rendering duration elapses.
Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination