CN116132656A - Virtual reality equipment and video comment display method - Google Patents

Info

Publication number
CN116132656A
Authority
CN
China
Prior art keywords
content
comment
video
displayed
display control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111347689.1A
Other languages
Chinese (zh)
Inventor
温佳乐
吴金旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Electronic Technology Shenzhen Co ltd
Original Assignee
Hisense Electronic Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Electronic Technology Shenzhen Co ltd filed Critical Hisense Electronic Technology Shenzhen Co ltd
Priority to CN202111347689.1A priority Critical patent/CN116132656A/en
Publication of CN116132656A publication Critical patent/CN116132656A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438Window management, e.g. event handling following interaction with the user interface
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782Web browsing, e.g. WebTV
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a virtual reality device and a video comment display method. Video watched through a virtual reality device is usually displayed on a web page which, in addition to the video display area, contains content such as comments on the video. While the video is not displayed full screen, the virtual reality device extracts the comment content corresponding to the video. When the video is displayed full screen, the device presents the comment content on a separate display interface, a 3D display control. While the video is windowed, the 3D display control sits behind the video display interface and is not visible; when the video goes full screen, the control sits in front of the video display interface and becomes visible. The user can therefore watch comment content synchronously while the video is displayed full screen, which enhances the immersion of viewing in the 3D scene.

Description

Virtual reality equipment and video comment display method
Technical Field
The application relates to the technical field of virtual reality, in particular to virtual reality equipment and a video comment display method.
Background
Virtual Reality (VR) technology is a display technology that uses a computer to simulate a virtual environment, giving a sense of environmental immersion. A virtual reality device presents virtual pictures to a user by means of virtual reality technology. On virtual reality devices, the VR browser is an important network video playback tool: as long as a video web page supports the VR effect, the VR browser can display it.
On some current virtual reality devices, when the VR browser is used to view the video content of a video web page in full screen, limitations in how video web pages are produced mean that the video content can only be rendered in a full-sized View window overlaid on the page. As a result, all other content on the page is covered and invisible, such as the comments below the video display area. Only when the browser is not displaying the video full screen can the user read other users' comments.
Yet other users' comments on a video give the viewer a distinctive experience and make watching the video more interesting. Constrained by the way current virtual reality devices display video full screen in the browser, the user cannot see related comments while watching video content full screen, which greatly reduces the immersion of watching video in the 3D scene.
Disclosure of Invention
The application provides a virtual reality device and a video comment display method to solve the problem that related comments cannot be seen when video is watched full screen on current virtual reality devices.
In one aspect, the present application provides a virtual reality device comprising a display and a controller. The display is configured to display a virtual user interface. The controller is configured to: establish a 3D display control, the 3D display control being a display interface independent of the virtual user interface; when video is not displayed full screen on the virtual user interface, control the 3D display control to be located behind the virtual user interface and extract the comment content corresponding to the video on the virtual user interface; and when the video is displayed full screen on the virtual user interface, control the 3D display control to be located in front of the virtual user interface and display the comment content on the 3D display control in order of the comment time of each comment.
Video watched through a virtual reality device in the present application is usually displayed on a web page which, in addition to the video display area, contains content such as comments on the video. While the video is not displayed full screen, the virtual reality device extracts the comment content corresponding to the video. When the video is displayed full screen, the device presents the comment content on a separate display interface, a 3D display control. While the video is windowed, the 3D display control sits behind the video display interface and is not visible; when the video goes full screen, the control sits in front of the video display interface and becomes visible. The user can therefore watch comment content synchronously while the video is displayed full screen, which enhances the immersion of viewing in the 3D scene.
In some implementations, the controller of the virtual reality device may be further configured to: determine whether the web page containing the video has finished loading; if so, obtain the HyperText Markup Language (HTML) file of the web page; extract the content of a target tag in the HTML file, the content of the target tag comprising a plurality of attributes corresponding to the comment content of the video; and filter the content of the target tag layer by layer to obtain the comment content of the video.
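The layer-by-layer filtering described above can be sketched with Python's standard `html.parser`. The `comment` class name and the sample markup are hypothetical stand-ins for whatever target tag and attributes a real video page would use:

```python
from html.parser import HTMLParser

class CommentExtractor(HTMLParser):
    """Collects the text found inside elements whose class attribute
    contains 'comment', mimicking layer-by-layer filtering of a target
    tag. Simplified: void tags (e.g. <br>) are not handled specially."""
    def __init__(self):
        super().__init__()
        self._depth = 0          # > 0 while inside the comment subtree
        self.comments = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if self._depth or "comment" in classes:
            self._depth += 1     # descend one layer into the subtree

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.comments.append(data.strip())

html = '<div class="comment-list"><p class="comment">Great video!</p></div>'
parser = CommentExtractor()
parser.feed(html)
```

After `feed`, `parser.comments` holds the comment strings recovered from the markup.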
In some implementations, the controller of the virtual reality device may be further configured to: obtain, from the content of the target tag, the content corresponding to a target attribute, the content of the target attribute comprising everything in the video comment area; traverse the content of the target attribute to determine all target objects, each target object comprising user information, comment time, comment content, and sub-comment content; and extract the character-string content of each target object as the comment content of the video.
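The traversal and string extraction might look like the following sketch; the dictionary layout of a target object (user, time, text, sub-comments) is an assumption made only for illustration:

```python
def flatten_comments(target_objects):
    """Walk each target object and pull out its display strings:
    user name, comment time, and comment text, followed by the
    same fields of any sub-comments nested under it."""
    rows = []
    for obj in target_objects:
        rows.append((obj["user"], obj["time"], obj["text"]))
        for sub in obj.get("sub_comments", []):
            rows.append((sub["user"], sub["time"], sub["text"]))
    return rows

objs = [{"user": "alice", "time": "2021-11-14 10:00", "text": "Nice!",
         "sub_comments": [{"user": "bob", "time": "2021-11-14 10:05",
                           "text": "Agreed."}]}]
rows = flatten_comments(objs)
```

Each resulting tuple is one line item that the 3D display control could render.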
In some implementations, the controller of the virtual reality device may be further configured to: when the 3D display control is in front of the virtual user interface, determine whether the number of display lines required by the comment content to be displayed exceeds the preset number of lines that the 3D display control can show; if it does, first display the preset number of lines of the comment on the 3D display control; after a preset time, display the remaining content of the comment on the 3D display control; and after the next preset time, determine whether the number of display lines required by the next comment to be displayed exceeds the preset number of lines, the comment time of the next comment being later than that of the current comment.
In some implementations, the controller of the virtual reality device may be further configured to: if the number of display lines does not exceed the preset number of lines, display the comment on the 3D display control; and after the preset time, determine whether the number of display lines required by the next comment to be displayed exceeds the preset number of lines that the 3D display control can show.
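Together, the two branches above amount to paginating each comment by a preset line count and advancing one page (or moving to the next comment) after each preset time. A minimal sketch of the pagination step, with hypothetical width and line-count values:

```python
import textwrap

def paginate_comment(comment, max_lines, line_width):
    """Wrap a comment to the control's line width, then split the
    wrapped lines into pages of at most max_lines lines each; each
    page would be shown for one preset time interval before the
    next page (or the next comment) is displayed."""
    lines = textwrap.wrap(comment, width=line_width) or [""]
    return [lines[i:i + max_lines] for i in range(0, len(lines), max_lines)]

# 25 characters wrap to 3 lines of width 10: a 2-line page, then a 1-line page
pages = paginate_comment("a" * 25, max_lines=2, line_width=10)
```

A scheduler would pop one page per preset interval, falling through to the next comment (sorted by comment time) when a comment's pages are exhausted.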
In another aspect, the application further provides a video comment display method applied to the above virtual reality device, the method comprising:
establishing a 3D display control, the 3D display control being a display interface independent of the virtual user interface;
when video is not displayed full screen on the virtual user interface, controlling the 3D display control to be located behind the virtual user interface and extracting the comment content corresponding to the video on the virtual user interface; and
when the video is displayed full screen on the virtual user interface, controlling the 3D display control to be located in front of the virtual user interface and displaying the comment content on the 3D display control in order of the comment time of each comment.
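The steps above can be sketched as a small state holder. The event name and comment fields below are illustrative assumptions, not part of the claimed method:

```python
class CommentOverlay:
    """Minimal sketch of the claimed behaviour: a 3D display control
    that sits behind the page while the video is windowed and in
    front of it while the video is full screen."""
    def __init__(self):
        self.in_front = False      # behind the virtual user interface
        self.comments = []

    def on_fullscreen_changed(self, fullscreen, page_comments=None):
        if fullscreen:
            self.in_front = True   # visible: comments shown over the video
        else:
            self.in_front = False  # hidden behind the page
            if page_comments is not None:
                # extract comments while the page is still visible,
                # ordered by comment time for later sequential display
                self.comments = sorted(page_comments, key=lambda c: c["time"])

overlay = CommentOverlay()
overlay.on_fullscreen_changed(False, [{"time": 2, "text": "b"},
                                      {"time": 1, "text": "a"}])
overlay.on_fullscreen_changed(True)
```

After the full-screen event, the control is in front and already holds the time-ordered comments extracted earlier.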
Since the method of the second aspect may be applied to the virtual reality device of the first aspect, its beneficial effects are the same as those of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 illustrates a display system architecture diagram including a virtual reality device, according to some embodiments;
FIG. 2 illustrates a VR scene global interface schematic in accordance with some embodiments;
FIG. 3 illustrates a recommended content region schematic diagram of a global interface, according to some embodiments;
FIG. 4 illustrates an application shortcut entry area schematic for a global interface in accordance with some embodiments;
FIG. 5 illustrates a suspension diagram of a global interface, according to some embodiments;
FIG. 6 illustrates a schematic diagram of displaying a video web page on a content recommendation area, in accordance with some embodiments;
FIG. 7 illustrates a schematic diagram of a View window when displaying video in full screen in a virtual reality device, according to some embodiments;
FIG. 8 illustrates a flow chart of a method of displaying video comments in a virtual reality device according to some embodiments;
FIG. 9 illustrates a schematic diagram of a 3D display control, according to some embodiments;
FIG. 10 illustrates a schematic diagram of a 3D display control behind the virtual user interface in accordance with some embodiments;
FIG. 11 illustrates a schematic diagram of a 3D display control in front of the virtual user interface in accordance with some embodiments;
FIG. 12 illustrates a schematic diagram of displaying video comments using a 3D display control in accordance with some embodiments;
FIG. 13 illustrates another schematic diagram of displaying video comments using a 3D display control according to some embodiments;
FIG. 14 illustrates a flow chart of a method of extracting comment content of a video in accordance with some embodiments;
FIG. 15 illustrates another method flow diagram for extracting comment content for a video in accordance with some embodiments;
FIG. 16 illustrates a schematic diagram of a 3D display control displaying a user name, in accordance with some embodiments;
FIG. 17 illustrates a schematic diagram of a 3D display control displaying comment time, according to some embodiments;
FIG. 18 illustrates a flow diagram of a method for a 3D display control to display comment content, in accordance with some embodiments.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the exemplary embodiments of the present application more apparent, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is apparent that the described exemplary embodiments are only some embodiments of the present application, but not all embodiments.
All other embodiments obtained by one of ordinary skill in the art without inventive effort, based on the exemplary embodiments shown in the present application, are intended to fall within the scope of the present application. Furthermore, while the disclosure is presented in terms of one or more exemplary embodiments, it should be understood that individual aspects of the disclosure can also constitute a complete technical solution on their own.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the above figures are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so termed may be interchanged where appropriate, so that the embodiments of the application can, for example, be implemented in orders other than those illustrated or described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
Reference throughout this specification to "multiple embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic shown or described in connection with one embodiment may be combined, in whole or in part, with features, structures, or characteristics of one or more other embodiments without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments, the virtual reality device 500 generally refers to a display device that can be worn on the user's face to provide an immersive experience, including, but not limited to, VR glasses, Augmented Reality (AR) devices, VR gaming devices, mobile computing devices, and other wearable computers. In some embodiments of the present application, VR glasses are taken as an example to describe the technical solution; it should be understood that the solution can also be applied to other types of virtual reality devices. The virtual reality device 500 may operate independently or may be connected as an external device to another intelligent display device, such as a smart TV, a computer, a tablet computer, or a server.
Once worn on the user's face, the virtual reality device 500 can display a media asset picture, providing close-range images for both of the user's eyes to create an immersive experience. To present the picture, the virtual reality device 500 may include a number of components for displaying the picture and for wearing on the face. Taking VR glasses as an example, the virtual reality device 500 may include a housing, position fixtures, an optical system, a display assembly, a gesture detection circuit, an interface circuit, and the like. In practice, the optical system, display assembly, gesture detection circuit, and interface circuit may be disposed in the housing to present a specific display picture, while position fixtures connected to the two sides of the housing allow the device to be worn on the user's face.
The gesture detection circuit includes detection elements such as a gravity acceleration sensor and a gyroscope. When the user's head moves or rotates, the circuit detects the user's head gesture and transmits the detected gesture data to a processing element such as the controller, which adjusts the specific picture content in the display assembly according to the data.
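As a rough illustration of that pipeline, angular rates reported by a gyroscope can be integrated over each frame interval to update the view orientation; the two-angle model below is a simplification assumed only for this sketch:

```python
import math

def update_view(orientation, gyro_rates, dt):
    """Integrate gyroscope angular rates (rad/s) over dt seconds to
    update a (yaw, pitch) view orientation; pitch is clamped so the
    virtual camera cannot flip over the poles."""
    yaw, pitch = orientation
    yaw += gyro_rates[0] * dt
    pitch = max(-math.pi / 2, min(math.pi / 2, pitch + gyro_rates[1] * dt))
    return (yaw, pitch)

# one 100 ms step turning right and tilting slightly down
view = update_view((0.0, 0.0), (0.5, -0.2), dt=0.1)
```

A real device would additionally fuse accelerometer data to correct gyroscope drift; that correction is omitted here.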
As shown in fig. 1, in some embodiments the virtual reality device 500 may be connected to the display device 200, and a network-based display system is built among the virtual reality device 500, the display device 200, and the server 400, with real-time data interaction among them. For example, the display device 200 may obtain media data from the server 400 and play it, and transmit specific picture content to the virtual reality device 500 for display.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device, among others. The particular display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired. The display device 200 may provide a broadcast receiving tv function, and may additionally provide an intelligent network tv function of a computer supporting function, including, but not limited to, a network tv, an intelligent tv, an Internet Protocol Tv (IPTV), etc.
The display device 200 and the virtual reality device 500 also exchange data with the server 400 via a variety of communication means, such as a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various content and interactions to the display device 200. For example, the display device 200 may receive software program updates, access a remotely stored digital media library, or interact with an Electronic Program Guide (EPG) by sending and receiving information. The server 400 may be one cluster or multiple clusters and may include one or more types of servers. Other web services, such as video on demand and advertising services, are also provided through the server 400.
In the course of data interaction, the user may operate the display device 200 through the mobile terminal 300 and the remote controller 100. The mobile terminal 300 and the remote controller 100 may communicate with the display device 200 by a direct wireless connection or by a non-direct connection. That is, in some embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 through a direct connection manner of bluetooth, infrared, etc. When transmitting the control instruction, the mobile terminal 300 and the remote controller 100 may directly transmit the control instruction data to the display device 200 through bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may also access the same wireless network with the display device 200 through a wireless router to establish indirect connection communication with the display device 200 through the wireless network. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may transmit the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also use the mobile terminal 300 and the remote controller 100 to directly interact with the virtual reality device 500, for example, the mobile terminal 300 and the remote controller 100 may be used as handles in a virtual reality scene to implement functions such as somatosensory interaction.
In some embodiments, the display assembly of the virtual reality device 500 includes a display screen and the drive circuits associated with it. To present a specific picture with a stereoscopic effect, the display assembly may include two display screens corresponding to the user's left and right eyes. When a 3D effect is presented, the picture content shown on the left and right screens differs slightly; for example, the left-camera and right-camera views captured for a 3D film source can be displayed on the two screens respectively. Because the left and right eyes observe slightly different screen content, the user perceives a picture with a strong stereoscopic impression when wearing the device.
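The parallax mechanism described above can be illustrated by offsetting a virtual camera horizontally for each eye; the 63 mm default interpupillary distance below is a commonly cited average, used here only as an assumed parameter:

```python
def eye_view_offsets(ipd_m=0.063):
    """Horizontal camera offsets (metres) for the left and right
    screens: half the interpupillary distance to each side, which
    produces the slight left/right picture difference the eyes
    fuse into a stereoscopic image."""
    half = ipd_m / 2.0
    return {"left": -half, "right": +half}

offsets = eye_view_offsets()
```

Each per-eye render would translate the scene camera by its offset before projecting.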
The optical system of the virtual reality device 500 is an optical module composed of a plurality of lenses, arranged between the user's eyes and the display screen. The refraction of light by the lenses and the polarization effect of the polarizers on the lenses lengthen the optical path, so that the content presented by the display assembly appears clearly in the user's field of view. To accommodate users with different eyesight, the optical system also supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the distances between them and hence the optical path, thereby adjusting picture clarity.
The interface circuit of the virtual reality device 500 may be used to transfer interaction data, and besides transferring gesture data and displaying content data, in practical application, the virtual reality device 500 may also be connected to other display devices or peripheral devices through the interface circuit, so as to implement more complex functions by performing data interaction with the connection device. For example, the virtual reality device 500 may be connected to a display device through an interface circuit, so that a displayed screen is output to the display device in real time for display. For another example, the virtual reality device 500 may also be connected to a handle via interface circuitry, which may be operated by a user in a hand, to perform related operations in the VR user interface.
Wherein the VR user interface can be presented as a plurality of different types of UI layouts depending on user operation. For example, the user interface may include a global interface, such as the global UI shown in fig. 2 after the AR/VR terminal is started, which may be displayed on a display screen of the AR/VR terminal or may be displayed on a display of the display device. The global UI may include a recommended content area 1, a business class extension area 2, an application shortcut entry area 3, and a hover area 4.
The recommended content area 1 is used to configure TAB columns for different classifications. Media assets, topics, and the like can be configured in these columns; the media assets may include 2D movies, educational courses, travel, 3D content, 360-degree panoramas, live broadcasts, 4K movies, program applications, games, and other services with media asset content. The columns can use different template styles and support simultaneous recommendation and arrangement of media assets and topics, as shown in fig. 3.
In some embodiments, the content recommendation area 1 may also include a main interface and sub-interfaces. As shown in fig. 3, the portion at the center of the UI layout is the main interface, and the portions on either side of it are sub-interfaces. The main interface and the sub-interfaces can display different recommended content. For example, according to the recommended type of film source, the service of the 3D film source may be displayed on the main interface, while the left sub-interface displays the service of the 2D film source and the right sub-interface displays the service of the panoramic film source.
The main interface and the sub-interfaces can display different service content with different content layouts, and the user can switch between them through specific interactions. For example, the user may move the focus mark left and right; when the focus mark is at the rightmost edge of the main interface and continues moving right, the right sub-interface is brought to the center of the UI layout. The main interface then switches to the service of the panoramic film source, the left sub-interface switches to the service of the 3D film source, and the right sub-interface switches to the service of the 2D film source.
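The switching behaviour reads like rotating a three-panel carousel; a sketch, using the film-source labels from the paragraph above:

```python
def shift_carousel(panels, direction):
    """Rotate a [left-sub, main, right-sub] layout one slot.
    direction=+1 brings the right sub-interface to the centre,
    direction=-1 brings the left sub-interface to the centre."""
    if direction == +1:
        return panels[1:] + panels[:1]
    if direction == -1:
        return panels[-1:] + panels[:-1]
    return panels

layout = ["2D", "3D", "panorama"]      # left sub, main, right sub
layout = shift_carousel(layout, +1)    # focus moves right past the edge
```

After the shift, the panoramic source occupies the main (center) slot, the 3D source moves to the left sub-interface, and the 2D source wraps around to the right.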
In addition, to make viewing easier, the main interface and the sub-interfaces can be rendered with different display effects. For example, the transparency of the sub-interfaces can be increased to give them a blurred appearance, highlighting the main interface; or the sub-interfaces can be rendered in grayscale while the main interface keeps its color, again highlighting the main interface.
In some embodiments, a status bar may also be provided at the top of the recommended content area 1, in which a plurality of display controls may be placed, including time, network connection status, power, and other common options. The content of the status bar may be user-defined; for example, weather or a user avatar may be added. Items in the status bar may be selected by the user to perform the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display a clock window in the current interface or jump to a calendar interface. When the user clicks the network connection status option, the virtual reality device 500 may display a WiFi list in the current interface or jump to the network settings interface.
The content displayed in the status bar may be presented in different content forms according to the setting status of a specific item. For example, the time control may be displayed directly as specific time text information and display different text at different times; the power control may be displayed as different pattern styles according to the current power remaining situation of the virtual reality device 500.
The status bar is used to enable the user to perform a common control operation, so as to implement quick setting of the virtual reality device 500. Since the setup procedure for the virtual reality device 500 includes a number of items, all of the commonly used setup options cannot generally be displayed in the status bar. To this end, in some embodiments, an expansion option may also be provided in the status bar. After the expansion options are selected, an expansion window may be presented in the current interface, and a plurality of setting options may be further provided in the expansion window for implementing other functions of the virtual reality device 500.
For example, in some embodiments, after the expansion option is selected, a "shortcut center" option may be provided in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 may display a shortcut center window. The shortcut center window may include screen capture, screen recording, and screen casting options for waking up the corresponding functions.
The service classification extension area 2 supports configuring extended classifications for different categories. If a new service type is added, an independent TAB can be configured to display the corresponding page content. The service classifications in the service classification extension area 2 can also be reordered, and services can be taken offline. In some embodiments, the service classification extension area 2 may include the following content: movie, education, travel, application, my. In some embodiments, the service classification extension area 2 is configured to show the major service classification TABs and supports configuring more classifications; its icons support configuration, as shown in fig. 3.
The application shortcut entry area 3 may display specified pre-installed applications; a plurality of applications may be specified and displayed first for operational recommendation, and special icon styles may be configured to replace the default icons. In some embodiments, the application shortcut entry area 3 further includes a left movement control and a right movement control for moving the option target, so as to select different icons, as shown in fig. 4.
The floating area 4 may be configured above the left diagonal or above the right diagonal of the fixed area, and may be configured as an alternative character or as a jump link. For example, after receiving a confirmation operation, the floating element jumps to an application or displays a designated function page, as shown in fig. 5. In some embodiments, the floating element may also be configured without a jump link, purely for visual presentation.
In some embodiments, the global UI further includes a status bar at the top for displaying the time, network connection status, battery status, and more shortcut entries. After an icon is selected with the handle of the AR/VR terminal, i.e., the handheld controller, the icon displays a text prompt with left-and-right expansion, and the selected icon is stretched and expanded to the left and right according to its position.
For example, after the search icon is selected, the search icon displays the text "search" together with the original icon, and after the icon or text is further clicked, the interface jumps to the search page. As another example, clicking the favorites icon jumps to the favorites TAB, clicking the history icon locates and displays the history page by default, clicking the search icon jumps to the global search page, and clicking the message icon jumps to the message page.
In some embodiments, interaction may be performed through a peripheral device. For example, the handle of the AR/VR terminal may operate the user interface of the AR/VR terminal and includes: a back button; a home key, which can realize a reset function when long-pressed; volume up and down buttons; and a touch area, which can realize clicking, sliding, and press-and-drag functions for the focus.
After the user operates on the VR user interface, the virtual reality device 500 may be controlled to display certain video resource content. In the virtual reality device 500, a video resource is typically played with a VR browser, and the video resource is displayed on a web page of the VR browser. As shown in fig. 6, a web page including the video display area 5 may be displayed in the recommended content area 1 of the main interface. At other locations on the web page, recommended content related to the video and some comments or video introduction content may be displayed; for example, recommended videos are displayed to the right of the video display area 5, and some comments are displayed below the video display area 5.
In the virtual reality device 500, the VR browser is an important network video playing tool. As long as a web page supports the VR effect, the VR browser can display the video on that web page. In some current virtual reality devices 500, when the VR browser is used to view video content on a video web page in full screen, due to limitations in how video web pages are produced, the video content can only be displayed on a full-sized View window overlaid on the video web page, so that the other content on the video web page is covered and invisible. As shown in fig. 7, when the virtual reality device 500 displays the video of the video display area in fig. 6 in full screen, a new display interface, namely the View window, is overlaid on the web page shown in fig. 6, and the video content is displayed on the View window. At this time, all content on the web page of fig. 6 is covered; the user can only view the video content and cannot view the other content on the web page, such as the comments below the video display area. Only when the browser is not displaying the video content in full screen can the user view other users' comments.
In fact, other users' comments on a video can give the viewing user a unique experience and make watching the video more interesting. However, limited by the way the browser in current virtual reality devices 500 displays video in full screen, the user cannot see the relevant comments while watching video content in full screen, which greatly reduces the immersion of watching video in a 3D scene.
In order to solve the above-mentioned problems, a virtual reality device 500 is provided in an embodiment of the present application, which includes a display and a controller. Wherein the display may be configured to display a virtual user interface, i.e. the VR user interface described in the above embodiments. As shown in fig. 8, the controller may be configured to perform the steps of:
step S101, a 3D display control 6 is established.
The 3D display control 6 in the embodiment of the present application is a display interface independent of a virtual user interface, and specific content may be displayed on the 3D display control 6. The virtual reality device 500 may begin building the 3D display control 6 when the virtual user interface is not displaying video full screen.
Fig. 9 illustrates a schematic diagram of a 3D display control, according to some embodiments. As shown in fig. 9, the area of the 3D display control 6 is smaller than the virtual user interface, and when the 3D display control 6 is displayed at the front end of the virtual user interface, the content displayed on the 3D display control 6 will only cover a part of the position of the virtual user interface, and will not cover the virtual user interface completely.
Step S102, when the virtual user interface does not display the video in a full screen mode, controlling the 3D display control 6 to be positioned at the rear end of the virtual user interface, and extracting comment content corresponding to the video on the virtual user interface.
When the virtual user interface does not display the video in full screen, the recommended content, comments, and so on can be displayed on the web page together with the video, so the user can view content such as video comments directly on the web page, and the virtual reality device 500 does not need to display the comment content separately.
At this time, to prevent the established 3D display control 6 from affecting the display of the web page content, the 3D display control 6 needs to be placed at the rear end of the virtual user interface. As shown in fig. 10, when the 3D display control 6 is placed at the rear end of the virtual user interface, the user cannot see the 3D display control 6 when viewing the virtual user interface from its front end.
Generally, the virtual reality device 500 displays the web page with the video display area 5 before displaying the video in full screen; after the user chooses to display the video in full screen, the virtual reality device 500 displays the video content in full screen in response to the user's instruction. In order to ensure that the user can learn the video comment content in time when watching the video content in full screen, the virtual reality device 500 can extract the comment content of the video on the web page while displaying the web page, so that the comments can be displayed directly and separately when the video is displayed in full screen.
When the 3D display control 6 is placed at the rear end of the virtual user interface, the virtual reality device 500 may display the extracted comment content on the 3D display control 6, or may not display it, because the 3D display control 6 is in an invisible state at this time; even if the comment content were displayed, the user could not see it.
Step S103, when the virtual user interface displays the video in a full screen mode, controlling the 3D display control 6 to be positioned at the front end of the virtual user interface, and sequentially displaying comment contents on the 3D display control 6 according to comment time of the comment contents.
As shown in fig. 11, the 3D display control 6 is in a visible state when placed in front of the virtual user interface, and the user sees the content on the 3D display control 6 when viewing the virtual user interface in front of the virtual user interface.
Because the virtual reality device 500 has already extracted the comment content of the video while the video was not displayed in full screen, the comment content can be displayed directly on the 3D display control 6 when the video is displayed in full screen. At this time, the user can not only watch the video content in full screen but also view the related comments synchronously.
FIG. 12 illustrates a schematic diagram of displaying video comments using a 3D display control according to some embodiments. As shown in fig. 12, the 3D display control 6 may be disposed at a middle lower portion of the front end of the virtual user interface, may be disposed at a middle upper portion of the front end of the virtual user interface, or may be disposed at another position of the front end of the virtual user interface.
The comment content displayed on the 3D display control 6, for example, "the whole universe is in a scalding, dense state", may occupy one line or multiple lines in the 3D display control 6, depending on the width of the 3D display control 6. If the width of the 3D display control 6 allows 10 characters per line, the comment needs to occupy two lines of the 3D display control 6, as shown in fig. 12; whereas if the width allows 20 characters per line, the comment needs to occupy only one line, as shown in fig. 13.
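The width-dependent line count described above can be sketched as a simple wrapping function. This is a minimal illustration that assumes wrapping by a fixed character count per line; a real control would measure rendered glyph widths.

```javascript
// Sketch: split a comment into lines of at most `charsPerLine` characters.
// `charsPerLine` stands in for the width of the 3D display control.
function wrapComment(text, charsPerLine) {
  const lines = [];
  for (let i = 0; i < text.length; i += charsPerLine) {
    lines.push(text.slice(i, i + charsPerLine));
  }
  return lines;
}

const comment = "the whole universe is in a scalding, dense state";
console.log(wrapComment(comment, 10).length); // more lines at a narrow width
console.log(wrapComment(comment, 20).length); // fewer lines at a wider width
```

The same comment thus occupies a different number of lines depending on the control's width, matching the fig. 12 and fig. 13 cases.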
In the embodiment of the present application, when the 3D display control 6 is established, its preset width and the preset number of lines that can display comment content are determined; the extracted comment content can then be displayed according to the preset width and the preset number of lines.
In addition, in order to prevent the 3D display control 6 from blocking the video content, in the embodiment of the present application, all parts of the 3D display control 6 other than the displayed comment content are in a transparent state. For example, the 3D display control 6 may be able to display three lines of comment content, but the comment content to be displayed actually occupies only two lines; the remaining line area on the 3D display control 6 is then displayed in a transparent state, and the user sees only the two lines of comment content when watching the video in full screen. Comment content displayed in this way is similar in effect to a floating bullet-screen comment: the comment content can be displayed to the user synchronously without blocking the picture content of the video.
When a user watches video content using the virtual reality device 500 of the embodiment of the present application, the virtual reality device 500 may obtain the comment content corresponding to the video while the video is not displayed in full screen. When the video is displayed in full screen, the virtual reality device 500 then displays the comment content separately on another display interface, namely the 3D display control 6. When the video is not displayed in full screen, the 3D display control 6 is at the rear end of the virtual user interface, i.e., in an invisible state. When the video is displayed in full screen, the 3D display control 6 is at the front end of the virtual user interface, i.e., in a visible state, so that the user can view the comment content synchronously while the virtual reality device 500 displays the video in full screen, enhancing the immersion of viewing in the 3D scene.
In general, a web page displayed in the virtual reality device 500 is the same as a web page displayed on a display device such as a television and has a corresponding Hyper Text Markup Language (HTML) file. HTML defines the meaning and structure of web page content, and defines its own syntax rules to represent meaning richer than plain "text", such as pictures, tables, and links. In short, HTML marks text with tags that indicate the meaning of the text.
The virtual reality device 500 displays a web page or video on the virtual user interface through the browser, and the browser knows the syntax of the HTML language, so that the virtual reality device 500 can view the HTML file through the browser. Typically, after the virtual reality device 500 loads a web page, a complete HTML file corresponding to the web page may be obtained. After extracting the target tag content from the HTML file, the virtual reality device 500 may extract the comment content from the target tag content.
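The flow of obtaining the loaded page's HTML and pulling out the target tag content can be sketched as follows. This is an illustrative sketch only: it operates on an HTML string and uses a regular expression, whereas a real browser implementation would read the live document and use a DOM parser; the id value "comment" is the illustrative case from the description.

```javascript
// Sketch: given the page's full HTML as a string, extract the inner content
// of the <div> whose id names the comment container (the "target tag").
function extractTargetTag(html, id) {
  // Non-greedy match up to the first closing </div>; sufficient for this
  // flat illustration, but nested divs would need a real parser.
  const re = new RegExp(`<div id="${id}">([\\s\\S]*?)</div>`);
  const m = html.match(re);
  return m ? m[1] : null;
}

const html =
  '<html><body><div id="comment"><p>nice video</p></div></body></html>';
console.log(extractTargetTag(html, "comment")); // "<p>nice video</p>"
```

The returned fragment corresponds to the target tag content from which the comment strings are then filtered out layer by layer.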
In the above procedure, as shown in fig. 14, the controller of the virtual reality device 500 may be further configured to perform the steps of:
in step S201, it is determined whether the web page including the video is completely loaded.
In step S202, if the web page is completely loaded, the hypertext markup language file of the web page, that is, the HTML file, is obtained.
Step S203, extracting the content of the target label in the hypertext markup language file.
An HTML file includes several tags, e.g., the <html> tag, <head> tag, and <body> tag, and different tags allow the HTML file to include different kinds of content, such as text, links, pictures, lists, tables, forms, and frames. The content of the video comment area on a web page is generally present in a <div> tag, so before the comment content can be extracted, the content of the <div> tag, that is, the content of the target tag, needs to be extracted first.
The <div> tag includes several attributes corresponding to the comment content, such as "comment" and "comments". Furthermore, the content of the video comment area is mostly concentrated in the attribute elements inside the <div> tag whose id is identified as "comment" or "comments".
Step S204, filtering the contents of the target labels layer by layer to obtain comment contents of the video.
After step S203, the obtained target tag content contains the attribute elements related to the comment content; in step S204, the attribute elements whose id is "comment" or "comments" may be further extracted from the target tag content, so as to obtain the comment content in "comment" or "comments".
When extracting the "comment" or "comments" attribute elements, the virtual reality device 500 can use the JavaScript interface for retrieval.
The "comment" or "comments" attribute element also has a class-object child node comment-list, and the comment-list child node includes div objects. Each div object contains user, text, info, and reply-box objects, which respectively represent the user information of the user participating in the comment, the comment time, the comment content, and the sub-comment content under the comment. For the reply-box object, the user, text, and info information need to be extracted recursively from the sub-objects it contains.
After the div objects are found, the string content in each div object can be extracted by means such as regular-expression filtering. The string content in a div object is the comment content of the text portion of the video comment area.
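The traversal of comment-list entries, including the recursive descent into reply-box sub-comments, can be sketched as below. The object shape (user/text/info fields and a replyBox array) is an assumption made for illustration and only mimics the structure described above.

```javascript
// Sketch: flatten a comment-list into comment records, recursing into each
// reply-box to collect sub-comments as well.
function collectComments(divObjects) {
  const out = [];
  for (const d of divObjects) {
    out.push({ user: d.user, text: d.text, info: d.info });
    if (d.replyBox) {
      // Sub-comments under the reply-box are extracted recursively.
      out.push(...collectComments(d.replyBox));
    }
  }
  return out;
}

const commentList = [
  {
    user: "Zhang San",
    text: "2009-08-15",
    info: "everything starts from a large explosion",
    replyBox: [{ user: "Li Si", text: "2009-08-16", info: "agreed" }],
  },
];
console.log(collectComments(commentList).length); // 2
```

The recursion mirrors the rule that a reply-box must yield its own user, text, and info information from its contained sub-objects.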
In the above procedure, as shown in fig. 15, the controller of the virtual reality device 500 may be further configured to perform the steps of:
step S301, obtaining the content corresponding to the target attribute from the content of the target label.
The target attribute may refer to the above-mentioned "comment" or "comments", and the content of the target attribute includes all the content in the video comment area.
Step S302, traversing the contents of the target attributes to determine all target objects.
Specifically, the content of the target attribute needs to be traversed to find the class-object child node comment-list, and the target objects are then filtered out of the comment-list. The target objects are the div objects mentioned above.
If there are several related comments for the video on the web page, the comment-list also includes several target objects, each corresponding to one comment. A comment typically has user information, a comment time, comment content, sub-comment content, and so on, so each target object also has corresponding objects representing the user information, comment time, comment content, and sub-comment content.
In step S303, the content of the character string in each target object is extracted as comment content of the video.
In the target object, the specific comment content usually exists in the form of a character string; in step S303, the desired comment content can be obtained simply by extracting the target string content from the target object according to the corresponding regular expression.
In some embodiments, in order to enrich the content displayed on the 3D display control 6, when extracting the target string content from the target object according to the corresponding regular expression, the user information of the comment content may also be added, so that the user watching the video can clearly know which user wrote the comment. For example, in the 3D display control 6 shown in fig. 16, not only the specific comment content "everything starts from a large explosion" is displayed, but also the user name "Zhang San" of the user who posted the comment is displayed before the comment content.
Alternatively, in order to enrich the content displayed on the 3D display control 6, when extracting the target string content from the target object according to the corresponding regular expression, the comment time of the comment content may also be added, so that the user watching the video can clearly know when other users wrote the comments. For example, in the 3D display control 6 shown in fig. 17, not only the specific comment content "everything starts from a large explosion" and the user name "Zhang San" are displayed, but also the comment time "August 15, 2009" is displayed on a new line.
It should be noted that, in the embodiment of the present application, the regular expressions required for extracting different string contents, such as user information, comment content, and comment time, are different.
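A per-field regular-expression extraction of this kind can be sketched as follows. The class names (user, text, info) and the flat markup are illustrative assumptions standing in for the actual page structure; each field is pulled out with its own pattern, matching the note above that different contents require different regular expressions.

```javascript
// Sketch: an illustrative target-object fragment with one sub-element per
// field, and a helper that extracts one field's string content by regex.
const divHtml =
  '<div class="user">Zhang San</div>' +
  '<div class="text">everything starts from a large explosion</div>' +
  '<div class="info">2009-08-15</div>';

function extractField(html, cls) {
  // Each class name yields a different concrete regular expression.
  const m = html.match(new RegExp(`<div class="${cls}">([^<]*)</div>`));
  return m ? m[1] : null;
}

console.log(extractField(divHtml, "user")); // "Zhang San"
console.log(extractField(divHtml, "text"));
console.log(extractField(divHtml, "info"));
```

Combining the extracted user name, comment text, and comment time gives the enriched display described for fig. 16 and fig. 17.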
In the 3D display control 6 of the embodiment of the present application, the comment contents may be scrolled and displayed in sequence according to their posting times. The scrolling time interval is preset; for example, if the preset time is 3 s, the 3D display control 6 displays the second comment content 3 s after displaying the first comment content.
In some cases, one comment may contain too much content to be displayed completely on one page of the 3D display control 6; for example, the preset number of lines of the 3D display control 6 is 3, but the comment content needs 5 lines to be displayed. In this case, 3 lines of the comment content are displayed on the 3D display control 6 first, and after the preset time the remaining two lines are displayed on the 3D display control 6. It should be noted that after the preset time, the 3 lines previously displayed on the 3D display control 6 are no longer displayed.
In the above-described process of displaying comment content, as shown in fig. 18, the controller of the virtual reality device 500 may be further configured to perform the steps of:
In step S401, when the 3D display control 6 is at the front end of the virtual user interface, it is determined whether the number of display lines required by the comment content to be displayed exceeds the preset number of lines of comment content that can be displayed on the 3D display control 6.
In step S402, if the number of display lines exceeds the preset number of lines, the content with the preset number of lines in the comment content to be displayed is displayed on the 3D display control 6.
Step S403, after the preset time, the remaining content of the comment to be displayed is displayed on the 3D display control 6.
And step S404, if the number of the display lines does not exceed the preset number of the lines, displaying comment contents to be displayed on the 3D display control 6.
Step S405, after the next preset time, continuously determining whether the number of display lines required by the next comment content to be displayed exceeds the preset number of lines capable of displaying the comment content on the 3D display control 6.
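The paging behavior of steps S401 to S405 can be sketched as a pure function that splits a comment's wrapped lines into pages of the preset line count; the timing (one page per preset interval) is abstracted away here for illustration.

```javascript
// Sketch: split wrapped comment lines into pages of at most `presetRows`
// lines; the control shows one page per preset time interval, replacing the
// previous page rather than appending to it.
function paginate(lines, presetRows) {
  const pages = [];
  for (let i = 0; i < lines.length; i += presetRows) {
    pages.push(lines.slice(i, i + presetRows));
  }
  return pages;
}

const commentLines = ["line 1", "line 2", "line 3", "line 4", "line 5"];
const pages = paginate(commentLines, 3);
console.log(pages.length); // 2: first 3 lines, then the remaining 2
console.log(pages[1]);     // ["line 4", "line 5"]
```

A 5-line comment on a 3-line control thus yields two pages, matching the example above where the first 3 lines are replaced by the remaining 2 after the preset time.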
In the embodiment of the present application, each comment has a corresponding comment time, where the comment time refers to the time at which the user posted the comment content. The comment time may be in the form of XX month XX day, or more specifically XX hour XX minute XX second. The form of the comment time is not specifically limited in the embodiment of the present application.
The posting order of the comment contents can be determined from their comment times, and the 3D display control usually starts displaying from the earliest comment content.
In some embodiments, the virtual reality device 500 may further store the comment contents in order according to how early or late their comment times are, with the comment time of the current comment content to be displayed being earlier than that of the next comment content to be displayed.
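The earliest-first ordering can be sketched with a simple sort on the stored comment records; the ISO-style time strings are an assumption for illustration (lexicographic order then matches chronological order).

```javascript
// Sketch: order extracted comments by comment time, earliest first, before
// feeding them to the 3D display control.
const comments = [
  { text: "later comment", time: "2009-08-16" },
  { text: "earliest comment", time: "2009-08-15" },
];

comments.sort((a, b) => a.time.localeCompare(b.time));
console.log(comments[0].text); // "earliest comment"
```

After sorting, each comment to be displayed always has an earlier comment time than the next one, as stated above.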
In some embodiments, when controlling whether the 3D display control 6 is at the front end or the rear end of the virtual user interface, a rectangular spatial coordinate system may also be established in the rendering scene of the virtual reality device 500, with the z-axis of the coordinate system perpendicular to the virtual user interface and extending away from the user. Thus, when the 3D display control 6 is controlled to be at the front end of the virtual user interface, the z-axis coordinate of the 3D display control 6 can be adjusted to be smaller than that of the virtual user interface; and when the 3D display control 6 is controlled to be at the rear end of the virtual user interface, the z-axis coordinate of the 3D display control 6 can be adjusted to be larger than that of the virtual user interface.
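The front/rear switching described above can be sketched as a z-coordinate adjustment relative to the interface. The coordinate convention follows the text (front end means a smaller z than the interface); the numeric offsets and names are arbitrary illustrative choices, not the actual rendering-engine API.

```javascript
// Sketch: place the 3D display control in front of or behind the virtual
// user interface by adjusting its z coordinate in the rendering scene.
const userInterfaceZ = 0;

function placeControl(fullScreen) {
  // Full screen: smaller z than the interface -> front end, visible.
  // Not full screen: larger z than the interface -> rear end, invisible.
  return { z: fullScreen ? userInterfaceZ - 1 : userInterfaceZ + 1 };
}

console.log(placeControl(true).z < userInterfaceZ);  // true -> in front
console.log(placeControl(false).z > userInterfaceZ); // true -> behind
```

In an engine such as Unity 3D this would correspond to setting the control's transform position along the scene's depth axis.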
When a user uses the virtual reality device 500 of the embodiment of the present application, the virtual reality device 500 may obtain the comment content corresponding to a video while the video is not displayed in full screen. When the video is displayed in full screen, the virtual reality device 500 then displays the comment content separately on another display interface, namely the 3D display control 6. When the video is not displayed in full screen, the 3D display control 6 is at the rear end of the video display interface, i.e., in an invisible state. When the video is displayed in full screen, the 3D display control 6 is at the front end of the video display interface, i.e., in a visible state, so that the user can view the comment content synchronously while the virtual reality device 500 displays the video in full screen, enhancing the immersion of viewing in the 3D scene.
It should be noted that the rendering scene in the embodiment of the present application refers to a virtual scene constructed by the rendering engine of the virtual reality device 500 through a rendering program. For example, a virtual reality device 500 based on the Unity 3D rendering engine may construct a Unity 3D scene when rendering the display. In the Unity 3D scene, various virtual objects and functional controls may be added to present a particular usage scene. For example, when playing multimedia resources, a display panel may be added to the Unity 3D scene, where the display panel is used to present the multimedia resource picture. Meanwhile, virtual object models such as seats, speakers, and people may be added to the Unity 3D scene to create a cinema effect.
In order to solve the problem that the user cannot see the relevant comments when watching the video in full screen by using the virtual reality device 500 at present, the embodiment of the present application further provides a video comment display method, which can be applied to the virtual reality device 500 in the foregoing embodiment and is specifically implemented by a controller in the virtual reality device 500. The method may comprise the steps of:
step S101, a 3D display control 6 is established; the 3D display control 6 is a display interface independent of the virtual user interface.
Step S102, when the virtual user interface does not display the video in full screen, controlling the 3D display control 6 to be at the back end of the virtual user interface, and extracting comment content corresponding to the video on the virtual user interface.
Step S103, when the video is displayed on the full screen of the virtual user interface, controlling the 3D display control 6 to be at the front end of the virtual user interface, and sequentially displaying the comment content on the 3D display control 6 according to the comment time of the comment content.
According to the video comment display method provided by the embodiment, the 3D display control 6 can be established when the video is not displayed in the full screen of the virtual user interface, the 3D display control 6 is controlled to be positioned at the rear end of the virtual user interface, and comment content corresponding to the video is extracted when the video is not displayed in the full screen of the virtual user interface. And when the video is displayed on the full screen of the virtual user interface, the 3D display control 6 is controlled to be positioned at the front end of the virtual user interface, and the comment content extracted before is displayed on the 3D display control 6, so that a user can watch related comment content while watching the full screen video, and the immersion of watching in the 3D scene is enhanced.
The foregoing detailed description of the embodiments is merely illustrative of the general principles of the present application and should not be taken in any way as limiting the scope of the invention. Any other embodiments developed in accordance with the present application without inventive effort are within the scope of the present application for those skilled in the art.

Claims (10)

1. A virtual reality device, comprising:
a display configured to display a virtual user interface;
a controller configured to:
establishing a 3D display control; the 3D display control is a display interface independent of the virtual user interface;
when the video is not displayed in a full screen mode on the virtual user interface, controlling the 3D display control to be positioned at the rear end of the virtual user interface, and extracting comment content corresponding to the video on the virtual user interface;
and when the video is displayed on the full screen of the virtual user interface, controlling the 3D display control to be positioned at the front end of the virtual user interface, and sequentially displaying the comment content on the 3D display control according to the comment time of the comment content.
2. The virtual reality device of claim 1, wherein the controller is further configured to:
determining whether the web pages including the video are completely loaded;
if the web pages are completely loaded, acquiring hypertext markup language files of the web pages;
extracting the content of the target label in the hypertext markup language file; the content of the target tag comprises a plurality of attributes corresponding to comment content of the video;
and filtering the contents of the target tag layer by layer to obtain comment contents of the video.
3. The virtual reality device of claim 2, wherein the controller is further configured to:
acquiring content corresponding to the target attribute from the content of the target label; the content of the target attribute comprises all the content in the video comment area;
traversing the content of the target attribute to determine all target objects; each target object comprises user information, comment time, comment content and sub comment content;
and extracting the character string content in each target object to serve as comment content of the video.
4. A virtual reality device according to any one of claims 1-3, wherein the controller is further configured to:
When the 3D display control is positioned at the front end of the virtual user interface, determining whether the number of display lines required by the comment content to be displayed exceeds the preset number of lines capable of displaying the comment content on the 3D display control;
if the display line number exceeds the preset line number, firstly displaying the content with the preset line number in the comment content to be displayed on the 3D display control;
after the preset time, redisplaying the rest content of the comment to be displayed on the 3D display control;
after the next preset time, continuously determining whether the number of display lines required by the next comment content to be displayed exceeds the preset number of lines capable of displaying the comment content on the 3D display control; the comment time of the next comment content to be displayed is later than the comment time of the current comment content to be displayed.
5. The virtual reality device of claim 4, wherein the controller is further configured to:
if the number of the display lines does not exceed the preset number of the lines, displaying the comment content to be displayed on the 3D display control;
after the preset time, continuously determining whether the number of display lines required by the next comment content to be displayed exceeds the preset number of lines capable of displaying the comment content on the 3D display control.
6. A video comment display method, the method comprising:
establishing a 3D display control; the 3D display control is a display interface independent of a virtual user interface;
when a video is not displayed in full screen on the virtual user interface, controlling the 3D display control to be positioned at the rear end of the virtual user interface, and extracting comment content corresponding to the video on the virtual user interface; and
when the video is displayed in full screen on the virtual user interface, controlling the 3D display control to be positioned at the front end of the virtual user interface, and sequentially displaying the comment content on the 3D display control according to the comment time of the comment content.
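The two branches of claim 6 can be sketched as a small dispatch function. This is an illustrative sketch only: the function name, the "front"/"back" position labels, and the action names are assumptions for readability, not the patent's actual implementation.

```python
def handle_video_state(video_fullscreen: bool) -> tuple:
    """Sketch of claim 6's two branches (all names are hypothetical).

    Windowed playback: the 3D display control sits behind the virtual
    user interface while comment content is extracted from the page.
    Fullscreen playback: the control moves to the front of the virtual
    user interface and comments are shown in order of comment time.
    """
    if video_fullscreen:
        return ("front", "display_comments_by_time")
    return ("back", "extract_comments")
```

The key design point the claims describe is that extraction happens while the control is hidden behind the interface, so comments are already available the moment the user enters fullscreen.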
7. The method of claim 6, wherein the step of extracting comment content corresponding to the video on the virtual user interface comprises:
determining whether the web page including the video is completely loaded;
if the web page is completely loaded, acquiring a hypertext markup language (HTML) file of the web page;
extracting the content of a target tag from the HTML file; the content of the target tag comprises a plurality of attributes corresponding to the comment content of the video; and
filtering the content of the target tag layer by layer to obtain the comment content of the video.
8. The method of claim 7, wherein the step of filtering the content of the target tag layer by layer to obtain the comment content of the video comprises:
acquiring content corresponding to a target attribute from the content of the target tag; the content of the target attribute comprises all of the content in a video comment area;
traversing the content of the target attribute to determine all target objects; each target object comprises user information, a comment time, comment content and sub-comment content; and
extracting the character string content in each target object as the comment content of the video.
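Claims 7 and 8 together describe locating a target tag/attribute in the page's HTML, treating each matching element as one comment object, and extracting its string content. A minimal sketch with Python's standard `html.parser`, assuming a hypothetical `comment-item` class as the target attribute and well-paired tags (no unclosed void elements inside a comment object); the real markup and attribute names would depend on the video site:

```python
from html.parser import HTMLParser


class CommentExtractor(HTMLParser):
    """Collect the text of every element carrying TARGET_ATTR.

    TARGET_ATTR and the markup shape are illustrative assumptions; the
    claims only require some target tag/attribute in the HTML file.
    """

    TARGET_ATTR = "comment-item"

    def __init__(self):
        super().__init__()
        self.depth = 0     # >0 while inside a target object's subtree
        self.current = []  # text fragments of the current target object
        self.comments = [] # extracted string content, one per object

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        # Enter a target object, or track nesting while inside one.
        if self.depth or self.TARGET_ATTR in classes:
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1
            if self.depth == 0 and self.current:
                # Leaving the target object: emit its joined text.
                self.comments.append(" ".join(self.current))
                self.current = []

    def handle_data(self, data):
        if self.depth and data.strip():
            self.current.append(data.strip())
```

Feeding a page into `CommentExtractor().feed(html)` leaves the per-object strings in `comments`; splitting each string into user information, comment time, and sub-comments (as claim 8 requires) would need knowledge of the site's actual markup.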
9. The method according to any one of claims 6 to 8, further comprising:
when the 3D display control is positioned at the front end of the virtual user interface, determining whether the number of display lines required by the comment content to be displayed exceeds a preset number of lines that the 3D display control can display;
if the number of display lines exceeds the preset number of lines, first displaying the preset number of lines of the comment content to be displayed on the 3D display control;
after a preset time, displaying the remaining content of the comment to be displayed on the 3D display control; and
after a next preset time, continuing to determine whether the number of display lines required by the next comment content to be displayed exceeds the preset number of lines; the comment time of the next comment content to be displayed is later than the comment time of the current comment content to be displayed.
10. The method according to claim 9, further comprising:
if the number of display lines does not exceed the preset number of lines, displaying the comment content to be displayed on the 3D display control; and
after the preset time, continuing to determine whether the number of display lines required by the next comment content to be displayed exceeds the preset number of lines.
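The line-count check in claims 9 and 10 amounts to paginating each comment into fixed-height pages and stepping through pages and comments on a timer. A sketch under stated assumptions: `max_lines` stands in for the claims' "preset number of lines" and `width` for the control's line width, both illustrative values.

```python
import textwrap


def paginate_comment(comment: str, max_lines: int = 3, width: int = 20):
    """Split one comment into pages of at most max_lines display lines.

    A comment that fits in max_lines yields one page (claim 10); a longer
    one yields the first max_lines lines, then the remainder (claim 9).
    """
    lines = textwrap.wrap(comment, width=width)
    return [lines[i:i + max_lines] for i in range(0, len(lines), max_lines)]


def display_schedule(comments, max_lines=3, width=20):
    """Order comments by comment time, then page each one in turn.

    comments: list of (comment_time, text) pairs. Each returned page
    would be shown for one "preset time" before the next page (or the
    next comment's first page) replaces it on the 3D display control.
    """
    schedule = []
    for _, text in sorted(comments):
        schedule.extend(paginate_comment(text, max_lines, width))
    return schedule
```

The later-comment-time ordering in claim 9 falls out of the sort: the next comment is only paged after every page of the current one has had its preset time on screen.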
CN202111347689.1A 2021-11-15 2021-11-15 Virtual reality equipment and video comment display method Pending CN116132656A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111347689.1A CN116132656A (en) 2021-11-15 2021-11-15 Virtual reality equipment and video comment display method

Publications (1)

Publication Number Publication Date
CN116132656A true CN116132656A (en) 2023-05-16

Family

ID=86293737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111347689.1A Pending CN116132656A (en) 2021-11-15 2021-11-15 Virtual reality equipment and video comment display method

Country Status (1)

Country Link
CN (1) CN116132656A (en)

Similar Documents

Publication Publication Date Title
US8601510B2 (en) User interface for interactive digital television
CN108632633B (en) Live webcast data processing method and device
CN110636353A (en) Display device
CN108632632B (en) Live webcast data processing method and device
CN114286142B (en) Virtual reality equipment and VR scene screen capturing method
TW201103325A (en) Method and system for presenting content
CN108635863B (en) Live webcast data processing method and device
CN112073798B (en) Data transmission method and equipment
WO2021088888A1 (en) Focus switching method, and display device and system
CN112732089A (en) Virtual reality equipment and quick interaction method
CN114302221B (en) Virtual reality equipment and screen-throwing media asset playing method
CN105306872B (en) Control the methods, devices and systems of multipoint videoconference
CN116260999A (en) Display device and video communication data processing method
CN115129280A (en) Virtual reality equipment and screen-casting media asset playing method
WO2022083554A1 (en) User interface layout and interaction method, and three-dimensional display device
CN116132656A (en) Virtual reality equipment and video comment display method
CN114327033A (en) Virtual reality equipment and media asset playing method
CN114846808B (en) Content distribution system, content distribution method, and storage medium
WO2020248682A1 (en) Display device and virtual scene generation method
CN114286077A (en) Virtual reality equipment and VR scene image display method
CN116069974A (en) Virtual reality equipment and video playing method
CN116126175A (en) Virtual reality equipment and video content display method
CN112732088B (en) Virtual reality equipment and monocular screen capturing method
CN116225205A (en) Virtual reality equipment and content input method
CN116540905A (en) Virtual reality equipment and focus operation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination