CN115129280A - Virtual reality device and screen-casting media asset playing method

Info

Publication number: CN115129280A
Application number: CN202110325049.4A
Authority: CN (China)
Prior art keywords: screen, mode, display, virtual reality, casting
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 曹月静, 孟亚州, 姜璐珩
Current Assignee: Hisense Visual Technology Co Ltd
Original Assignee: Hisense Visual Technology Co Ltd
Application filed by Hisense Visual Technology Co Ltd
Priority: CN202110325049.4A; PCT/CN2021/137059 (published as WO2022151882A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F16/00 Information retrieval; database structures therefor; file system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Abstract

The application provides a virtual reality device and a screen-casting media asset playing method. After receiving a user's control instruction, the method can receive screen-casting data and extract screen-casting media asset information from it. The media asset information is then used to obtain, from a server, film source addresses for different play modes, so that media asset data in a specified play mode can be acquired by accessing a film source address and the screen-cast picture can be played. Because the method can acquire media asset data in a specified play mode while displaying the screen-casting data, the media asset content cast by the intelligent terminal can be played in 3D or panoramic mode, which solves the problems of a single play mode and the difficulty of obtaining an immersive effect.

Description

Virtual reality device and screen-casting media asset playing method
Technical Field
The application relates to the technical field of virtual reality, and in particular to a virtual reality device and a screen-casting media asset playing method.
Background
Virtual Reality (VR) technology is a display technology that uses a computer to simulate a virtual environment, giving the user a sense of environmental immersion. A virtual reality device is a device that uses virtual display technology to present a virtual picture to the user. Generally, a virtual reality device includes two display screens for presenting virtual picture content, corresponding to the user's left and right eyes respectively. When the content displayed on the two screens comes from images of the same object taken from different visual angles, a stereoscopic viewing experience can be brought to the user.
The virtual reality device can serve as a screen-casting receiver (sink) and play a screen-cast picture sent by an intelligent terminal (source). During screen casting, the intelligent terminal sends the screen-cast picture to the virtual reality device through a screen-casting protocol, and the virtual reality device renders the picture so that its content is displayed in a specific interface.
The virtual reality device can play multimedia resources of various film source types, for example 2D film sources, 3D film sources, and panoramic film sources, but the screen-cast picture sent by an intelligent terminal such as a mobile phone is usually presented in 2D form, so the virtual reality device can only present it in 2D. This is unfavorable for rendering an immersive effect and easily leads to a single viewing form, reducing user experience.
Disclosure of Invention
The application provides a virtual reality device and a screen-casting media asset playing method, aiming to solve the problems that the traditional playing method has a single play mode and is not conducive to obtaining an immersive effect.
In one aspect, the present application provides a virtual reality device, comprising a display, a communicator, and a controller, wherein the display is configured to display a playback interface and other user interfaces; the communicator is configured to establish a screen-casting connection with an intelligent terminal; and the controller is configured to perform the following program steps:
acquiring a control instruction, input by a user, for establishing a screen-casting connection;
in response to the control instruction, receiving screen-casting data sent by the intelligent terminal, the screen-casting data including screen-casting media asset information;
acquiring, from a server, a film source address corresponding to at least one play mode according to the screen-casting media asset information; and
accessing the film source address to control the display to display the media asset picture corresponding to the film source address.
In another aspect, the application further provides a screen-casting media asset playing method, applied to the above virtual reality device and comprising:
acquiring a control instruction, input by a user, for establishing a screen-casting connection;
in response to the control instruction, receiving screen-casting data sent by the intelligent terminal, the screen-casting data including screen-casting media asset information;
acquiring, from a server, a film source address corresponding to at least one play mode according to the screen-casting media asset information; and
accessing the film source address to control the display to display the media asset picture corresponding to the film source address.
According to the above technical solutions, the virtual reality device and the screen-casting media asset playing method can receive screen-casting data after receiving the user's control instruction and extract screen-casting media asset information from it. The media asset information is then used to obtain, from the server, film source addresses for different play modes, so that media asset data in a specified play mode can be acquired by accessing a film source address and the screen-cast picture can be played. The method can acquire media asset data in a specified play mode while displaying the screen-casting data, so that the media asset content cast by the intelligent terminal can be played in 3D or panoramic mode, which solves the problems of a single play mode and the difficulty of obtaining an immersive effect.
Drawings
To explain the technical solutions of the present application more clearly, the drawings needed in the embodiments are briefly described below; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic illustration of a display system including a virtual reality device in some exemplary embodiments of the present application;
FIG. 2 is a schematic view of a VR scene global interface in some exemplary embodiments of the present application;
FIG. 3 is a schematic diagram of a recommended content area of a global interface in some exemplary embodiments of the present application;
FIG. 4 is a schematic diagram illustrating an application shortcut operation entry area of a global interface in accordance with some exemplary embodiments of the present application;
FIG. 5 is a schematic diagram of a floating object in the global interface in some exemplary embodiments of the present application;
FIG. 6 is a schematic illustration of a playback interface in some exemplary embodiments of the present application;
FIG. 7 is a schematic illustration of a partition of a playing interface region in some exemplary embodiments of the present application;
FIG. 8 is a schematic view of a mode switching interface in some exemplary embodiments of the present application;
FIG. 9 is a schematic flowchart of a screen-casting media asset playing method in some exemplary embodiments of the present application;
FIG. 10 is a flowchart illustrating screen-casting interaction in some exemplary embodiments of the present application;
FIG. 11 is a flowchart illustrating playing a screen-cast picture according to the play mode in some exemplary embodiments of the present application;
FIG. 12 is a schematic flowchart illustrating a process for playing a screen-cast picture based on a rendering scene in some exemplary embodiments of the present application;
FIG. 13 is a schematic flow chart illustrating creation of a database using the MyBatis framework in some exemplary embodiments of the present application;
FIG. 14 is a schematic illustration of an interaction flow based on the switching interface in some exemplary embodiments of the present application;
FIG. 15 is a schematic diagram of the switching window when a 3D film source is detected in some exemplary embodiments of the present application;
FIG. 16 is a schematic diagram of the switching window when a 360° film source is detected in some exemplary embodiments of the present application;
FIG. 17 is a schematic illustration of a switching window with a preview screen in some exemplary embodiments of the present application;
FIG. 18 is a flowchart illustrating a screen-casting media asset playing method in some exemplary embodiments of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments are described clearly and completely below with reference to the drawings in those embodiments. Obviously, the described exemplary embodiments are only some of the embodiments of the present application, not all of them.
All other embodiments obtained by a person skilled in the art from the exemplary embodiments shown in the present application without inventive step fall within the scope of protection of the present application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can also be utilized independently of the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the drawings of this application are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order. Data so labeled are interchangeable under appropriate circumstances, so that the embodiments of the application can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later-developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some exemplary implementations," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In some exemplary embodiments of the present application, the virtual reality device 500 generally refers to a display device that can be worn on the face of a user to provide an immersive experience, including but not limited to VR glasses, Augmented Reality (AR) devices, VR gaming devices, mobile computing devices, and other wearable computers. The technical solutions of the embodiments of the present application are described taking VR glasses as an example, but it should be understood that they can also be applied to other types of virtual reality devices. The virtual reality device 500 may operate independently or be connected to other intelligent display devices as an external device, where the display devices may be smart televisions, computers, tablet computers, servers, and the like.
The virtual reality device 500 may be worn on the user's face and display media pictures close to the user's eyes to provide an immersive experience. To present media asset pictures, the virtual reality device 500 may include a number of components for display and for facial wearing. Taking VR glasses as an example, the virtual reality device 500 may include a housing, position fixtures, an optical system, a display assembly, a gesture detection circuit, an interface circuit, and other components. In practical applications, the optical system, display assembly, gesture detection circuit, and interface circuit may be arranged within the housing to present a specific display picture, while the two sides of the housing connect to the position fixtures so that the device can be worn on the user's face.
The gesture detection circuit contains gesture detection elements such as a gravity acceleration sensor and a gyroscope. When the user's head moves or rotates, the circuit detects the user's gesture and transmits the detected gesture data to a processing element such as a controller, which adjusts the specific picture content in the display assembly according to the detected data.
As shown in fig. 1, in some exemplary embodiments, the virtual reality device 500 may access the display device 200, and construct a network-based display system with the server 400, so that data interaction may be performed among the virtual reality device 500, the display device 200, and the server 400 in real time, for example, the display device 200 may obtain media data from the server 400 and play the media data, and transmit specific screen content to the virtual reality device 500 for display.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device, among others. The particular display device type, size, resolution, etc. are not limiting, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired. The display apparatus 200 may provide a broadcast receiving television function and may additionally provide an intelligent network television function of a computer support function, including but not limited to a network television, an intelligent television, an Internet Protocol Television (IPTV), and the like.
The display device 200 and the virtual reality device 500 also perform data communication with the server 400 through various communication methods, and may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. Illustratively, the display device 200 receives software program updates or accesses a remotely stored digital media library by sending and receiving information, as well as through Electronic Program Guide (EPG) interactions. The server 400 may be one cluster or a plurality of clusters and may include one or more types of servers. Other web service contents, such as video on demand and advertisement services, are also provided through the server 400.
During data interaction, the user may operate the display device 200 through the mobile terminal 300 or the remote controller 100, which can communicate with the display device 200 through a direct wireless connection or an indirect connection. That is, in some exemplary embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 directly through Bluetooth, infrared, or the like, and when transmitting a control command they send the command data directly to the display device 200 through Bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may instead join the same wireless network as the display device 200 through a wireless router to establish indirect communication with it: when transmitting a control command, they first send the command data to the wireless router, which then forwards it to the display device 200.
In some exemplary embodiments, the user may also use the mobile terminal 300 and the remote controller 100 to interact with the virtual reality device 500 directly, for example, the mobile terminal 300 and the remote controller 100 may be used as a handle in a virtual reality scene to implement functions such as somatosensory interaction.
In some exemplary embodiments, the display assembly of the virtual reality device 500 includes a display screen and the drive circuitry associated with it. To present a specific picture and produce a stereoscopic effect, the display assembly may include two display screens corresponding to the user's left and right eyes. When a 3D effect is presented, the picture contents shown on the left and right screens differ slightly; the two screens may respectively display the pictures captured by the left and right cameras during the shooting of the 3D film source. Because the user observes the picture content with the left and right eyes separately, a display picture with a strong stereoscopic impression can be observed when wearing the device.
The optical system in the virtual reality device 500 is an optical module consisting of a plurality of lenses. Arranged between the user's eyes and the display screen, it increases the optical path through the refraction of optical signals by the lenses and the polarization effect of the polarizers on the lenses, so that the content presented by the display assembly is displayed clearly within the user's field of view. To adapt to the eyesight of different users, the optical system also supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the mutual distance between the lenses and thus the optical path, thereby adjusting picture clarity.
The interface circuit of the virtual reality device 500 may be configured to transmit interactive data, and in addition to the above-mentioned transmission of the gesture data and the display content data, in practical applications, the virtual reality device 500 may further connect to other display devices or peripherals through the interface circuit, so as to implement more complex functions by performing data interaction with the connection device. For example, the virtual reality device 500 may be connected to a display device through an interface circuit, so as to output a displayed screen to the display device in real time for display. As another example, the virtual reality device 500 may also be connected to a handle via an interface circuit, and the handle may be operated by a user's hand to perform related operations in the VR user interface.
The VR user interface may be presented as a number of different UI layouts according to user operations. For example, the user interface may include a global UI. As shown in FIG. 2, after the AR/VR terminal is started, the global UI may be displayed on the display screen of the AR/VR terminal or on the display of the display device. The global UI may include a recommended content area 1, a service class extension area 2, an application shortcut operation entry area 3, and a floating object area 4.
The recommended content area 1 is used to configure TAB columns for different classifications. Media assets, special topics, and the like can be selected and configured in a column. The media assets may include services with content such as 2D movies, education courses, travel, 3D, 360° panorama, live broadcast, 4K movies, program applications, and games; the columns can use different template styles and can support simultaneous recommendation and arrangement of media assets and topics, as shown in FIG. 3.
In some exemplary embodiments, the recommended content area 1 may further include a main interface and sub-interfaces. As shown in FIG. 3, the portion at the center of the UI layout is the main interface, and the portions on both sides of it are the sub-interfaces. The main interface and the sub-interfaces may present different recommended contents. For example, according to the recommended film source type, the 3D film source service can be displayed on the main interface, the 2D film source service on the left sub-interface, and the panoramic film source service on the right sub-interface.
The main interface and the sub-interfaces can present different content layouts while displaying different service contents, and the user can switch between them through a specific interactive action. For example, the user controls the focus mark to move left and right; when the focus mark is at the rightmost side of the main interface and is moved right again, the right sub-interface is displayed at the center of the UI layout. At this time the main interface switches to display the panoramic film source service, the left sub-interface switches to display the 3D film source service, and the right sub-interface switches to display the 2D film source service.
In addition, to make viewing easier for the user, the main interface and the sub-interfaces can be displayed with different display effects. For example, the transparency of the sub-interfaces may be increased to blur them, thereby highlighting the main interface. The sub-interfaces can also be set to a gray effect while the main interface keeps a color effect, again highlighting the main interface.
In some exemplary embodiments, a status bar may be arranged at the top of the recommended content area 1, with a plurality of display controls, including common options such as time, network connection status, and power. The content of the status bar may be customized by the user; for example, weather, a user avatar, and similar content may be added. The user may select an item in the status bar to trigger the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display a time window in the current interface or jump to a calendar interface; when the user clicks the network connection status option, the virtual reality device 500 may display a WiFi list on the current interface or jump to the network setup interface.
The content displayed in the status bar may be presented in different forms according to the setting status of each item. For example, the time control may be displayed directly as specific time text that changes over time, while the power control may be displayed in different pattern styles according to the current remaining power of the virtual reality device 500.
The status bar enables the user to perform common control operations and rapidly set up the virtual reality device 500. Since the setup program of the virtual reality device 500 includes many items, all commonly used setting options are typically not displayed in the status bar in their entirety. To this end, in some exemplary embodiments, an expansion option may also be provided in the status bar: after it is selected, an expansion window is presented in the current interface, and further setting options in that window implement other functions of the virtual reality device 500.
For example, in some exemplary embodiments, after the expansion option is selected, a "shortcut center" option may be set in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 may display a shortcut center window. The shortcut center window may include "screen capture", "screen recording", and "screen casting" options for waking up the corresponding functions.
The service class extension area 2 supports configuring extension classes for different classifications. If a new service type appears, an independent TAB can be configured for it to display the corresponding page content. The extension classes in the service class extension area 2 can also be re-ordered, and services can be taken offline. In some exemplary embodiments, the service class extension area 2 may include: movies & TV, education, travel, applications, and my. In some exemplary embodiments, the service class extension area 2 is configured to expose large service class TABs and supports configuring more classes, as shown in FIG. 3.
The application shortcut operation entry area 3 can specify that pre-installed applications are displayed at the front for operation recommendation and supports configuring a special icon style to replace the default icon; a plurality of pre-installed applications may be specified. In some exemplary embodiments, the application shortcut operation entry area 3 further includes left and right movement controls for moving the option target and selecting different icons, as shown in FIG. 4.
The floating object area 4 may be configured above the left or right oblique side of a fixed area and may be configured as an alternative character or as a jump link. For example, after receiving a confirmation operation, the floating object jumps to an application or displays a designated function page, as shown in FIG. 5. In some exemplary embodiments, the floating object may also be configured without a jump link and used solely for image presentation.
In some exemplary embodiments, the global UI further includes a status bar at the top for displaying time, network connection status, power status, and more shortcut entries. When an icon is selected with the handle of the AR/VR terminal, i.e., with the handheld controller, the icon displays a text prompt and can expand left and right; the selected icon is stretched and expanded according to its position.
For example, after the search icon is selected, it displays the text "search" together with the original icon, and further clicking the icon or the text jumps to the search page. Similarly, clicking the favorites icon jumps to the favorites TAB, clicking the history icon displays the history page at the default location, clicking the search icon jumps to the global search page, and clicking the message icon jumps to the message page.
In some exemplary embodiments, interaction may also be performed through a peripheral; for example, the handle of the AR/VR terminal can operate the user interface of the terminal. The handle includes a return button; a home key, whose long press realizes a reset function; volume up and down buttons; and a touch area that realizes clicking, sliding, pressing, and dragging of the focus.
The user may enter different scene interfaces through the global interface. For example, as shown in FIG. 6 and FIG. 7, the user may enter the playing interface through the "playing interface" entry in the global interface, or start the playing interface by selecting any media asset in the global interface. In the playing interface, the virtual reality device 500 may create a 3D scene through the Unity 3D engine and render specific picture content in that scene.
In the playing interface, the user can watch specific media asset content. To provide a better viewing experience, different virtual scene controls can also be arranged in the playing interface to cooperate with the media asset content in presenting specific scenes or implementing interaction. For example, a panel control can be loaded in the Unity 3D scene to present picture content and can be matched with other virtual home controls to simulate the effect of a movie theater screen.
The virtual reality device 500 may present operation UI content in the playing interface. For example, a media asset list UI control may be displayed in front of the display panel in the Unity 3D scene, showing icons of media assets stored locally on the current virtual reality device 500 or icons of network media assets playable on it. The user can select any icon in the media asset list to play the corresponding media asset data, and the selected asset is displayed in real time on the display panel.
The media assets displayed in the Unity 3D scene can take various forms, such as pictures and videos. Due to the display characteristics of the VR scene, they include at least 2D pictures or videos, 3D pictures or videos, and panoramic pictures or videos.
A 2D picture or video is a traditional picture or video file; when displayed, the same image is shown on the two display screens of the virtual reality device 500. In this application, 2D pictures and videos are collectively referred to as 2D film sources. A 3D picture or video, i.e., a 3D film source, is produced by shooting the same object at different angles with at least two cameras; it can display different images on the two displays of the virtual reality device 500, realizing a stereoscopic effect. A panoramic picture or video, i.e., a panoramic film source, is a panoramic image obtained with a panoramic camera or a special shooting method; such pictures can be displayed by creating a display sphere in the Unity 3D scene to present a panoramic effect.
A 3D film source can be further divided into left-right 3D film sources, top-bottom 3D film sources, and the like, according to the picture arrangement of each frame image in the source. Each frame of a left-right 3D film source includes a left part and a right part, which are the pictures shot by the left-eye and right-eye cameras respectively. A panoramic film source can be further divided into forms such as 360° panorama, 180° panorama, and fisheye panorama according to its image field of view, and the image composition of each frame differs between these forms. To present a better stereoscopic effect, a panoramic film source may further be of a true panorama type, a left-right panorama type, a top-bottom panorama type, and the like.
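This taxonomy maps naturally onto a small data model. The following is a minimal sketch in Java; the type and field names are illustrative assumptions for this description, not identifiers disclosed by the patent:

```java
// Illustrative sketch of the film source taxonomy described above.
// All names are assumptions made for clarity, not part of the patent.
public enum SourceLayout {
    MONO_2D,          // one picture per frame, shown identically to both eyes
    SIDE_BY_SIDE_3D,  // each frame holds left/right halves from two cameras
    TOP_BOTTOM_3D,    // each frame holds top/bottom halves from two cameras
    PANORAMA_360,     // full spherical panorama
    PANORAMA_180,     // hemispherical panorama
    PANORAMA_FISHEYE  // fisheye panorama
}

// A media asset item pairs a title with the layout and address of one film source.
record FilmSource(String title, SourceLayout layout, String sourceUrl) {}
```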
Because the media asset data displayed in the playing interface includes multiple film source types, and different film source types require different image output modes, a UI control for play control can further be arranged in the playing interface. For example, a UI control for play control may be placed in front of the display panel as a floating interactive control, i.e., its display can be triggered by a specific trigger action. As shown in FIG. 8, this UI control may include a "mode switching" option; when the user clicks it, a mode list is displayed, including options such as "2D mode" and "3D mode". After the user selects any option in the mode list, the virtual reality device can be controlled to play the media asset data in the media asset list according to the selected mode.
Similarly, when the user selects a media asset item in the list, the playing interface can play it, i.e., the picture corresponding to the item is displayed on the display panel. While the item is playing, the user may also switch the play mode by invoking the UI control for play control and selecting any mode option in it, so that the virtual reality device 500 plays the selected media asset data in the switched play mode.
To accommodate different play modes, in some exemplary embodiments a media asset item contains data in multiple forms. For example, some asset items contain both 2D-form and 3D-form media asset data; that is, one asset item corresponds to two asset files, where each frame image of one file contains only a single picture, while each frame image of the other contains two pictures, left and right (or top and bottom). During playing, one of the files can be selected according to the play mode to obtain the corresponding effect.
The virtual reality device 500 may also establish a screen-casting connection with an intelligent terminal, so that video data sent by the intelligent terminal is played through the virtual reality device 500. The intelligent terminal establishing the screen-casting connection with the virtual reality device 500 may be any device with media asset playing and picture processing functions, including but not limited to a mobile phone, a smart television (display device 200), a computer, and a tablet computer. After the screen-casting connection is established, the intelligent terminal can send screen-casting data to the virtual reality device 500; upon receiving it, the virtual reality device 500 renders the screen-casting data in a rendering scene and outputs left-eye and right-eye image pictures for display on the display.
The form of the screen-casting data sent to the virtual reality device 500 differs with the screen-casting mode. For example, for a virtual reality device 500 and an intelligent terminal that support the Digital Living Network Alliance (DLNA) protocol, after the screen-casting connection is established the intelligent terminal may send the Uniform Resource Locator (URL) address of the currently played media asset to the virtual reality device 500, which then obtains the media asset data by accessing that URL address. Alternatively, the intelligent terminal may transmit a video data stream directly to the virtual reality device 500, i.e., send the played media asset file or the currently displayed picture in the form of a video data stream, which the virtual reality device 500 renders and displays on the display.
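A sink-side receiver therefore has to distinguish these two data forms. Below is a minimal sketch in Java under the assumption of a simple carrier type; none of the names come from the patent or from a real screen-casting library:

```java
import java.io.InputStream;
import java.util.Optional;

// Assumed carrier for received screen-casting data: either a media URL
// (DLNA-style cast) or a raw video stream (mirroring-style cast).
record ScreenCastData(Optional<String> mediaUrl, Optional<InputStream> videoStream) {}

class CastSink {
    void onScreenCastData(ScreenCastData data) {
        data.mediaUrl().ifPresentOrElse(
            this::playFromUrl,                                       // fetch the asset ourselves
            () -> data.videoStream().ifPresent(this::renderStream)); // render the pushed stream
    }
    void playFromUrl(String url)     { /* access the URL and decode the asset */ }
    void renderStream(InputStream s) { /* decode and render frames as they arrive */ }
}
```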
Since intelligent terminals such as mobile phones can only present a display picture with a 2D effect, the media asset data sent over the screen-casting connection is generally also in 2D film source form, while the virtual reality device 500 supports film source forms such as 3D and panorama. During screen casting, therefore, not only is the film source form single, but the definition is low and a good immersive effect cannot be obtained. To adapt the screen-casting process, some embodiments of the present application provide a virtual reality device 500 comprising a display, a communicator, and a controller. The display includes a left display and a right display and is used to present the playing interface and other user interfaces; the communicator realizes data communication over a wired or wireless network connection and is used to establish the screen-casting connection with the intelligent terminal. As shown in FIG. 9, the controller is further configured to perform the following program steps:
S1: acquire a control instruction, input by a user, for establishing a screen-casting connection.
In practical applications, the virtual reality device 500 may receive various control commands input by the user, and different control commands correspond to different interactive actions and realize different functions. The control instruction for establishing the screen-casting connection can be input either through the virtual reality device 500 or through the intelligent terminal.
For example, as shown in FIG. 9, the user may control the virtual reality device 500 to access a designated wireless local area network and connect a mobile phone to the same network. The user then performs an interactive operation on the mobile phone and clicks the screen-casting function button in its operation interface; the phone presents a list of screen-casting devices, and the user selects the current virtual reality device 500 from the list, thereby establishing the screen-casting connection. Through this interaction, the intelligent terminal sends the control instruction for establishing the screen-casting connection to the virtual reality device 500.
For a virtual reality device 500 supporting other interaction modes, the control instruction can be input through the supported input mode. For example, for a virtual reality device 500 or an intelligent terminal supporting an intelligent voice system, voice content such as "establish screen casting" or "I want to cast the screen" can also control the intelligent terminal to establish a screen-casting connection with the virtual reality device 500.
S2: in response to the control instruction, receive the screen-casting data sent by the intelligent terminal.
After the screen-casting connection is established, the intelligent terminal can send screen-casting data to the virtual reality device 500, which receives it. Because different screen-casting connection modes send different screen-casting data, the virtual reality device 500 may adopt different data parsing methods for different connection modes in order to parse the screen-casting media asset information out of the screen-casting data. The screen-casting media asset information is the information about the media asset corresponding to the screen-casting data, including the media asset name, the film source type, and other contents.
For example, a user performs an interactive operation on the mobile phone to play a movie online; the movie's name is "A", and because it is currently played on the phone, its film source type is a 2D film source. While the phone is playing the movie, the user clicks the "screen casting" button on the playing interface and selects the current virtual reality device 500 from the pop-up list of screen-casting devices, thereby sending screen-casting data to it. From this data, the virtual reality device 500 can parse out the movie name "A" and the 2D film source type.
For different screen-casting connection modes, the virtual reality device 500 may need to adopt different methods to parse the screen-casting media asset information from the screen-casting data. For example, for an intelligent terminal that transmits screen-casting data using the DLNA protocol, the URL address of movie "A" can be parsed from the screen-casting data; the virtual reality device 500 accesses the URL address to obtain the corresponding media asset information and extracts content such as the movie name and film source type from it.
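In a DLNA/UPnP cast, the item metadata typically travels as DIDL-Lite XML alongside the transport URI. The following is a rough Java sketch of extracting a title and resource URL from such metadata; the element names follow DIDL-Lite conventions, but the class itself is an illustrative assumption and error handling is omitted:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

class DidlLiteParser {
    // Pull the asset title (dc:title) and resource URL (res) out of
    // DIDL-Lite metadata. Matching by local name keeps the sketch short.
    static String[] titleAndUrl(String didlXml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder().parse(
                new ByteArrayInputStream(didlXml.getBytes(StandardCharsets.UTF_8)));
        String title = doc.getElementsByTagNameNS("*", "title").item(0).getTextContent();
        String url   = doc.getElementsByTagNameNS("*", "res").item(0).getTextContent();
        return new String[] { title, url };
    }
}
```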
S3: acquire, from a server, a film source address corresponding to at least one play mode according to the screen-casting media asset information.
After parsing the screen-casting media asset information out of the screen-casting data, the virtual reality device 500 can use it for matching in the server's database; when any table entry in the database is hit, the film source addresses for different play modes can be acquired. For example, when the parsed movie name is "A" and the film source type is 2D, the movie name is used as an index value to match entries of types other than 2D film source in the database. If the match shows that the currently cast movie "A" also exists as a 3D-form and/or panoramic-form film source, the media asset link address of that film source can be obtained from the database, so that the virtual reality device 500 can play a 3D-form or panoramic-form film source with the same title.
Here, the database is a relation table constructed according to a specific mapping-relation framework. It can store, for each media asset on a designated media asset platform, the name and film source types of the asset and the storage address of the media asset data for each film source type. The database may be maintained by the virtual reality device 500; for example, the device may read the media asset data and corresponding storage addresses on the designated platform, construct a mapping relation table, and add it to the database according to the database's mapping-relation framework. The database may also be maintained uniformly by the operator of the virtual reality device 500: the operator reads the platform's media asset data and the storage addresses of the film sources through the server, generates the mapping relation table, and adds it to the database. When the virtual reality device 500 accesses the media asset selection interface, the database may be synchronized by a daemon.
In some exemplary embodiments, the database may also be maintained by the virtual reality device 500 together with the operator. For example, the virtual reality device 500 may store different film source data of some media assets locally. It then obtains the operator-maintained database through the server and, for locally stored media asset data, constructs a mapping relation table according to the database's mapping-relation framework and stores it in the database, so that matching can be completed through the database once the screen-casting media asset information is obtained.
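FIG. 13 describes building this database with the MyBatis framework. As a rough illustration, a lookup over such a mapping-relation table might be declared as a MyBatis mapper like the one below; the media_source table and its columns are assumptions made for this sketch, not a schema disclosed by the patent:

```java
import java.util.List;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Assumed schema: media_source(title, source_type, url), one row per
// (asset, film source type) pair in the mapping-relation table.
public interface MediaSourceMapper {
    @Select("SELECT url FROM media_source "
          + "WHERE title = #{title} AND source_type = #{type}")
    List<String> findSourceUrls(@Param("title") String title,
                                @Param("type") String type);
}
```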
In some exemplary embodiments, the virtual reality device 500 may generate a film source acquisition request according to the screen-casting media asset information and send it to a cloud server; the cloud server responds by feeding back a film source address, so that the virtual reality device 500 receives a film source address adapted to at least one play mode.
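In such a deployment, the film source acquisition request can be an ordinary HTTP query. A minimal Java sketch follows; the endpoint URL and query parameters are illustrative assumptions, not a documented API:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

class FilmSourceClient {
    private final HttpClient http = HttpClient.newHttpClient();

    // Ask the (assumed) server endpoint for a film source address matching
    // the asset title and the desired play mode; the response body is
    // expected to carry the address the device will then access.
    String requestSourceAddress(String assetTitle, String playMode) throws Exception {
        URI uri = URI.create("https://server.example/film-sources?title="
                + URLEncoder.encode(assetTitle, StandardCharsets.UTF_8)
                + "&mode=" + URLEncoder.encode(playMode, StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```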
S4: access the film source address to control the display to display the media asset picture corresponding to the film source address.
After matching the film source addresses corresponding to different play modes in the server, the virtual reality device 500 can obtain the media asset data corresponding to the specified play mode by accessing the obtained film source address and render it, according to the rendering mode corresponding to that data, into the specific display picture.
For example, if the virtual reality device 500 matches movie "A" and a film source address of the 3D film source type is recorded in the server's database, the 3D-form movie "A" can be obtained by accessing that address, so that the virtual reality device 500 plays movie "A" in the 3D film source play mode.
As can be seen, in the above embodiments the virtual reality device 500 can extract the screen-casting media asset information from the screen-casting data after receiving the user's control instruction, then use that information to obtain film source addresses for different play modes from the server, acquire the media asset data in the specified play mode by accessing a film source address, and play the screen-cast picture. Through matching in the server, the virtual reality device 500 can play the media asset content cast by the intelligent terminal in 3D or panoramic mode, making the playing forms of the screen-casting process richer and improving picture quality.
It should be noted that when the virtual reality device 500 matches a film source address through the database, it can match not only media asset addresses of types different from the film source type of the screen-casting data, i.e., match same-named 3D film source or panoramic film source assets when the cast film source type is 2D; it can also match a media asset address of the same film source type as the screen-casting data, i.e., match a 2D film source address in the database. By matching a film source of the same type, the virtual reality device 500 can obtain a film source with a better image quality through the database, improving the playing effect of the screen-cast media asset. For example, the movie "A" shared in the screen-casting data is suited to playback on the mobile phone, so during screen casting a 2D version suited to playback on the virtual reality device 500 can be matched from the database to obtain a better display effect.
In some exemplary embodiments, when the intelligent terminal and the virtual reality device 500 transmit the screen-casting data based on the DLNA protocol, the virtual reality device 500 may perform matching using the URL address, the media asset title, and the media asset type in the screen-casting data. That is, as shown in FIG. 11, the step of obtaining, from the server, a film source address corresponding to at least one play mode using the screen-casting media asset information further includes:
S310: parsing the screen-casting media asset information from the screen-casting data;
S320: detecting the current play mode;
S330: if the media asset type is adapted to the current play mode, accessing the current media asset URL address;
S340: if the media asset type is not adapted to the current play mode, acquiring a film source address adapted to the current play mode from the server according to the current media asset URL address and/or the media asset title.
Before performing the matching, the virtual reality device 500 may parse the screen-casting media asset information from the screen-casting data. Since the screen-casting data is transmitted based on the DLNA protocol, the URL address, media asset title, and media asset type of the cast asset can all be parsed from it. For example, the URL address "HTTP:// × × × ×" of a movie "B", the corresponding media asset title "B", and the media asset type "2D film source" may be extracted from the screen-casting data.
After the screen-casting media asset information is parsed, the current play mode of the virtual reality device 500 can be detected. For example, by detecting the input/output mode of the current rendering scene and the arrangement of the display panels in it, the current play mode may be determined to be the 3D mode; that is, the rendering scene is detected to include two display panels, visible to the left and right display cameras respectively, which present the left-eye and right-eye images of each frame.
The virtual reality device 500 may then compare the media asset type with the current play mode to determine whether the film source type in the screen-casting data is adapted to the current play mode. When it is, the media asset data can be obtained directly from the URL address specified in the screen-casting data to play the screen-cast picture. For example, if the media asset type parsed from the screen-casting data is a 2D film source and the current play mode is also the 2D mode, the parsed URL address can be accessed directly so that the cast asset is played in the 2D mode.
When the media asset type is not adapted to the current play mode, the film source address corresponding to the current play mode may be queried from the server; that is, the virtual reality device 500 may use the current media asset URL address and/or the media asset title to match a film source address adapted to the current play mode in the server. For example, if the media asset type parsed from the screen-casting data is a 2D film source and the current play mode is the 3D mode, the current media asset URL address "HTTP:// × × × ×" or the media asset title "B" must be used for matching in the server to query the 3D film source of movie "B".
Different play modes require different rendering of the media asset data. For example, when the current play mode is the 3D mode, the virtual reality device 500 may segment each frame image and input the left-eye and right-eye image parts to the display panels in the rendering scene separately; when the current play mode is the 2D mode, no segmentation is needed and the picture content is displayed directly on the display panel. In this embodiment, therefore, after the screen-casting data is obtained, comparing whether the media asset type is adapted to the current play mode adapts the screen-casting data to the current play mode of the virtual reality device 500, so that rendering is completed in the adapted rendering mode, an incorrect rendering mode is avoided, and the playing quality is improved.
It should be noted that when the virtual reality device 500 performs matching in the server with the current media asset URL address and/or the media asset title, a suitable film source address may not always be found; that is, for some cast media assets the server has no film source corresponding to other play modes. When no film source address adapted to the current play mode is matched in the server, the virtual reality device 500 may automatically switch the play mode to adapt to the screen-casting data.
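Putting steps S310-S340 and this fallback together, the decision logic can be sketched in a few lines of Java; the types and helper names are illustrative assumptions:

```java
import java.util.List;

class ModeAdapter {
    // Decide which URL to play: the cast URL itself (S330), an adapted film
    // source matched on the server (S340), or the cast URL again after
    // switching the play mode when the server has no adapted source.
    String resolvePlayUrl(String castUrl, String castType, String currentMode,
                          List<String> adaptedUrlsFromServer) {
        if (castType.equals(currentMode)) {
            return castUrl;                      // S330: type fits the current mode
        }
        if (!adaptedUrlsFromServer.isEmpty()) {
            return adaptedUrlsFromServer.get(0); // S340: adapted film source found
        }
        switchPlayMode(castType);                // fallback: adapt the mode instead
        return castUrl;
    }

    void switchPlayMode(String mode) { /* reconfigure the rendering scene */ }
}
```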
After acquiring the specified film source address, the virtual reality device 500 may obtain the media asset data adapted to the current play mode through that address and play the screen-cast picture. As shown in FIG. 12, in the step of accessing the film source address to play the screen-cast picture, the virtual reality device 500 may perform the following steps:
S410: acquiring a video data stream from the film source address;
S420: displaying the video data stream in a virtual rendering scene;
S430: performing image shooting of the virtual rendering scene to generate the screen-cast picture;
S440: sending the screen-cast picture to the display for displaying.
The virtual reality device 500 may obtain a video data stream by accessing the film source address and display the video data stream in the rendering scene. The virtual rendering scene includes a left display camera and a right display camera, with which the virtual reality device 500 may capture images of the virtual rendering scene containing the video data stream, thereby generating a screen casting picture that is finally sent to the display for presentation.
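A minimal sketch of the capture loop behind steps s410-s440 follows; the `Decoder`, `ScenePanel`, `DisplayCamera`, and `Display` types are illustrative stand-ins, not an API from the patent:

```java
import java.awt.image.BufferedImage;

// Illustrative types standing in for the device's real decoder/scene/display APIs.
public final class CastRenderLoop {
    interface Decoder       { BufferedImage nextFrame(); }   // returns null at end of stream
    interface ScenePanel    { void show(BufferedImage img); }
    interface DisplayCamera { BufferedImage capture(); }
    interface Display       { void present(BufferedImage left, BufferedImage right); }

    // s410–s440: decode the stream, show frames in the scene, shoot, and present.
    static void run(Decoder decoder, ScenePanel panel,
                    DisplayCamera leftCam, DisplayCamera rightCam, Display display) {
        BufferedImage frame;
        while ((frame = decoder.nextFrame()) != null) {
            panel.show(frame);                        // s420: display in the virtual scene
            BufferedImage left  = leftCam.capture();  // s430: shoot the rendered scene
            BufferedImage right = rightCam.capture();
            display.present(left, right);             // s440: send the picture to the display
        }
    }
}
```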
For the process of displaying the video data stream, the virtual reality device 500 may decode the video data stream into multiple frames of media asset pictures, and then add a display panel to the virtual rendering scene according to the current play mode, so as to display the media asset pictures frame by frame on the loaded display panel.
The form of the display panel loaded into the virtual rendering scene differs between play modes. When the play mode is the 3D mode, the virtual reality device 500 may add a left panel and a right panel to the virtual rendering scene, where the left panel is visible to the left display camera and the right panel is visible to the right display camera. The virtual reality device 500 may split each frame of the video data stream into a left-eye image portion and a right-eye image portion, display the left-eye portion on the left panel and the right-eye portion on the right panel, and simultaneously capture the virtual rendering scene with the left and right display cameras to obtain rendering scene images containing the video stream picture.
Since the left panel is visible only to the left display camera, the image frame captured by the left display camera contains the left-eye image content; likewise, the right panel is visible only to the right display camera, so the image frame captured by the right display camera contains the right-eye image content. Capturing with both display cameras simultaneously gives the final screen casting picture both the 3D effect of the video data stream and the 3D effect of the rendered scene, for a better immersive experience.
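This per-camera visibility can be modeled with layer masks, as many rendering engines do. The sketch below is a minimal stand-alone model; the layer names and mask mechanics are illustrative, not taken from the patent:

```java
// Minimal layer-mask model of per-camera panel visibility (names are illustrative).
public final class StereoVisibilityDemo {
    static final int LEFT_LAYER = 1 << 0, RIGHT_LAYER = 1 << 1;

    record Panel(String name, int layer) {}
    record Camera(String name, int cullingMask) {
        boolean sees(Panel p) { return (cullingMask & p.layer) != 0; }
    }

    public static void main(String[] args) {
        Panel leftPanel   = new Panel("leftPanel",  LEFT_LAYER);
        Panel rightPanel  = new Panel("rightPanel", RIGHT_LAYER);
        Camera leftCam    = new Camera("leftDisplayCamera",  LEFT_LAYER);
        Camera rightCam   = new Camera("rightDisplayCamera", RIGHT_LAYER);

        System.out.println(leftCam.sees(leftPanel));   // true:  left eye sees left panel
        System.out.println(leftCam.sees(rightPanel));  // false: right panel is culled
        System.out.println(rightCam.sees(rightPanel)); // true:  right eye sees right panel
    }
}
```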
If the current play mode is the panoramic mode, the virtual reality device 500 may add a curved panel to the virtual rendering scene, whose shape depends on the panoramic form of each frame in the panoramic film source. For example, when the panoramic film source is of the 180° panorama type, a hemispherical curved panel may be added to the rendered scene; when it is of the 360° panorama type, a spherical curved panel may be added; and when it is of the fisheye panorama type, an annular curved panel may be added.
To output the final image picture, the curved panel added to the rendered scene should take the midpoint between the left display camera and the right display camera as its arc center. For example, in the 360° panoramic play mode, the virtual reality device 500 may display the video image on the inner surface of a 3D sphere model in the scene, where the sphere is centered on the midpoint of the left and right display cameras and its radius may be set according to the shooting range of each panoramic frame in the video data stream, so that the panoramic image picture is displayed completely on the sphere and 360° panoramic media playback is achieved.
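The mapping from panorama type to panel geometry, together with the arc-center rule, can be sketched as follows; the enum names and the vector representation are assumptions for illustration:

```java
// Chooses the curved panel geometry per panorama type (illustrative mapping).
public final class PanoramaGeometry {
    enum PanoramaType { PANO_180, PANO_360, FISHEYE }
    enum Shape { HEMISPHERE, SPHERE, ANNULUS }

    static Shape panelFor(PanoramaType type) {
        return switch (type) {
            case PANO_180 -> Shape.HEMISPHERE; // 180° source: hemispherical panel
            case PANO_360 -> Shape.SPHERE;     // 360° source: full sphere around cameras
            case FISHEYE  -> Shape.ANNULUS;    // fisheye source: ring-shaped panel
        };
    }

    /** Arc center sits at the midpoint between the two display cameras. */
    static double[] arcCenter(double[] leftCam, double[] rightCam) {
        return new double[] {
            (leftCam[0] + rightCam[0]) / 2,
            (leftCam[1] + rightCam[1]) / 2,
            (leftCam[2] + rightCam[2]) / 2
        };
    }
}
```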
In some exemplary embodiments, to enable the virtual reality device 500 to match the film source address in the database, the server may also create the database in advance. The database is created based on the MyBatis framework and, as shown in fig. 13, creation includes the following steps:
s501: reading the mapping relation table through a Reader object;
s502: acquiring the SqlSession of the current thread to open a transaction;
s503: reading the operation number in the mapping relation table through the SqlSession and reading the SQL statement;
s504: committing the transaction to create the database.
The mapping relation table includes film source labels, film source addresses, and the mapping relation between them. The server may create a database in the initial control system to record the mapping between play modes and film source types.
In the process of matching the film source address, since one media asset item can support multiple play modes, a "one-to-many"/"many-to-one" mapping relationship exists between media asset items and play modes. One-to-many means that, from the viewpoint of picture/video resources, one picture/video has multiple play modes; many-to-one means that, from the viewpoint of play modes, multiple play modes correspond to one picture/video resource. Based on this mapping, a database may be created through a mapping framework. Frameworks capable of creating the database include the open-source object-relational mapping framework Hibernate, Java Database Connectivity (JDBC), and the persistence-layer framework MyBatis.
The MyBatis framework has an interface binding function, including annotation-bound Structured Query Language (SQL) and Extensible Markup Language (XML)-bound SQL. The MyBatis framework also supports dynamic SQL with Object Graph Navigation Language (OGNL) expressions. Therefore, the MyBatis framework can flexibly configure the SQL statements to be run through XML or annotations, map Java objects and SQL statements to generate the SQL that is finally executed, and remap the SQL execution results to generate Java objects. The learning threshold of the MyBatis framework is low, a database maintainer can write raw SQL directly, SQL execution performance can be strictly controlled, and flexibility is high.
In the process of creating and maintaining the database with the MyBatis framework, the MyBatis mapping file, i.e., the mapping relation table, can be read through a Reader object; a SqlSessionFactory object is then created through a SqlSessionFactoryBuilder object, and the SqlSession of the current thread is obtained from it. After the SqlSession is obtained, the transaction can be set to start by default, so that the operation number in the mapping file is read through the SqlSession, the SQL statement is read, the transaction is committed, and the mapping relation table is stored in the database.
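A minimal sketch of this sequence using the public MyBatis API is shown below; the configuration file name and the mapped statement id "filmSource.createMapping" are placeholders, not values from the patent:

```java
import java.io.Reader;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public final class MappingTableBootstrap {
    public static void main(String[] args) throws Exception {
        // s501: read the MyBatis configuration, which references the mapping file
        Reader reader = Resources.getResourceAsReader("mybatis-config.xml");
        SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(reader);

        // s502: open an SqlSession for the current thread (not auto-committed by default)
        try (SqlSession session = factory.openSession()) {
            // s503: run the mapped SQL statement by its id in the mapping file
            session.update("filmSource.createMapping");
            // s504: commit the transaction to persist the mapping relation table
            session.commit();
        }
    }
}
```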
It can be seen that, in this embodiment, the one-to-many/many-to-one mapping relationship between media asset items and play modes can be established quickly through the MyBatis framework and the mapping relation table. A database based on the MyBatis framework not only reduces developers' workload and learning load but also makes it convenient to maintain media asset items across different play interfaces uniformly, so that the virtual reality device 500 can quickly query the corresponding play parameters through the database.
With the database created on the MyBatis framework, the virtual reality device 500 may, after extracting the screen casting media asset information, extract the mapping relation table from the database, query the film source label from the table according to the current media asset URL address and/or media asset title, and finally select the URL address suited to the current play mode through the film source label, i.e., obtain the film source address suited to the specified play mode.
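The label lookup might be expressed as an annotated MyBatis mapper like the following; the table and column names are assumptions for illustration:

```java
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Hypothetical mapper: the table and column names are not from the patent.
public interface FilmSourceMapper {
    @Select("SELECT url FROM film_source "
          + "WHERE (asset_url = #{url} OR title = #{title}) AND play_mode = #{mode}")
    String findAdaptedUrl(@Param("url") String url,
                          @Param("title") String title,
                          @Param("mode") String mode);
}
```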
In the above embodiment, after receiving the screen casting data, the virtual reality device 500 may match film source addresses in different play modes through the server and thereby obtain multiple film source types. To obtain a better interaction experience and improve the efficiency of the screen casting connection, the virtual reality device 500 may play the screen casting data in the 2D mode immediately after receiving it, preserving the interaction effect of a traditional screen casting connection, while displaying a mode switching window in the display interface that prompts the user to switch to another play mode for a better screen casting interaction effect.
When the user chooses to switch to another play mode, the device accesses, from the server, the media asset address corresponding to that mode and displays the corresponding media asset resources on the display.
That is, as shown in fig. 14, in some exemplary embodiments, the step of accessing the film source address to play the screen casting picture further includes:
s451: controlling the display to display a switching window according to the film source address;
s452: receiving a switching instruction input by a user through the switching window;
s453: switching the play mode in response to the switching instruction;
s454: and accessing the film source address to play the screen casting picture in the switched mode.
In this embodiment, after receiving the screen casting data, the virtual reality device 500 may play it according to the film source type specified in the screen casting media asset information, i.e., present a screen casting picture of the 2D media asset type in the 2D mode. Meanwhile, running in the background, the virtual reality device 500 may also use the screen casting media asset information to acquire from the server a film source address suited to at least one other play mode, so as to match the film sources corresponding to the other play modes.
When the virtual reality device 500 finds film source addresses for different play modes in the server, the switching window may be displayed according to the matched film source address. The switching window prompts that the current screen casting asset has film sources for different play modes, so that a user who wants a different play mode can complete the switch through the window. To achieve this prompting effect, the switching window may include prompt information. For example, as shown in fig. 15 and 16, a switching window may be displayed over the current interface while the 2D screen casting data plays, containing prompt text such as: "A 3D (360°) film source is detected for this video. Switch to 3D (360°) mode?", together with "Yes" and "No" options for the user to choose.
Accordingly, while the switching window is displayed, the virtual reality device 500 may receive a switching instruction input by the user for switching the play mode, respond to it by switching the play mode, and simultaneously access the matched film source address so as to play the screen casting picture in the switched mode. For example, when the user clicks the "Yes" option in the switching window, a switching instruction is input. In response, the virtual reality device 500 may switch the play mode to the 3D mode, i.e., add the left and right panels to the rendered scene, and, according to the matched 3D film source address, obtain the 3D media asset data by accessing that address and play it in the 3D mode, completing the switch.
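The handling of the "Yes"/"No" choice can be sketched as below; the `Player` interface is a hypothetical stand-in for the device's player component:

```java
// Hypothetical player interface; names are illustrative, not from the patent.
public final class SwitchWindowHandler {
    enum PlayMode { MODE_2D, MODE_3D }

    interface Player {
        void setPlayMode(PlayMode mode);
        void open(String filmSourceUrl);
        void resume();
    }

    /** "Yes" switches to the matched 3D source; "No" keeps the 2D cast playing. */
    static void onChoice(boolean yes, Player player, String matched3dUrl) {
        if (yes) {
            player.setPlayMode(PlayMode.MODE_3D); // adds left/right panels in the scene
            player.open(matched3dUrl);            // access the matched 3D film source
        } else {
            player.resume();                      // continue playing in the 2D mode
        }
    }
}
```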
Obviously, the user can also input, through the switching window, a control instruction not to switch the play mode. For example, the user may click the "No" option to keep the virtual reality device 500 playing the screen casting data in the 2D mode.
To let the user experience the switched viewing effect more intuitively and guide them through the mode switch, in some exemplary embodiments the switching window may further include a preview screen, which presents a media asset picture or a poster picture according to the corresponding film source type. For example, when a 3D film source is matched in the server for the current screen casting data, the 3D media asset data can be acquired by accessing the 3D film source address and played in the 3D mode to obtain a 3D media asset picture, which is then displayed in the preview window so that the user can experience the 3D viewing effect. To reduce the data processing load, the preview window may preview only part of the video in each mode, for example only the first 5 s of the media asset data.
While displaying the switching window with the preview screen, the virtual reality device 500 may also receive a switching instruction input by the user through the window. For example, as shown in fig. 17, the switching window may contain only the preview screen and a close button; clicking any region of the window other than the close button inputs a switching instruction, and the play mode of the virtual reality device 500 is then switched as in the above embodiment. Clicking the close button inputs a closing instruction; after receiving it, the virtual reality device 500 parses the screen casting data to obtain the screen casting picture and plays it in the 2D play mode.
In other embodiments of the present application, the presentation mode of an intelligent terminal such as a mobile phone, tablet, or computer is the 2D mode, and the media assets played on it are 2D resources, so the intelligent terminal generally casts a media asset picture of a 2D resource. After receiving the screen casting URL link, the virtual reality device 500 may access the link through the server and display the corresponding film source on the display: it obtains the 2D media asset resource behind the screen casting link and presents it on the main interface of the display, preserving the interaction effect of a traditional screen casting connection.
Further, after receiving a screen casting link for a 2D resource from the mobile terminal, the server parses the link and, when a matching 3D film source is found in the server, sends the preview picture or poster picture corresponding to that 3D film source to the virtual reality device 500. When the user chooses to watch the 3D film source, the player's play mode is switched to the 3D mode, and the picture of the 3D film source is played on the main interface.
That is, as shown in fig. 18, in some exemplary embodiments, the step of playing the screen casting picture further includes:
s601: acquiring a control instruction input by a user for establishing the screen casting connection and controlling the casting of media assets;
s602: receiving, in response to the control instruction, screen casting data sent by the intelligent terminal, wherein the screen casting data includes screen casting media asset information;
s603: controlling the display to present the screen casting media asset picture on the main interface in the 2D mode and to present a recommendation picture on the secondary interface, where the recommendation picture is an entry picture, sent by the server, of recommended resources whose presentation mode differs from the 2D mode, and the recommended resources are associated with the cast media assets;
s604: and, in response to the user's selection of the recommendation picture, controlling the main interface of the display to present the recommended resources in a presentation mode different from the 2D mode.
In this embodiment, the screen casting data received by the virtual reality device 500 actually contains URL data; the device requests the screen casting media assets from the server based on that URL and, after receiving them, may play them according to the media asset type specified in the screen casting media asset information, i.e., first present a screen casting picture of the 2D film source type on the main interface in the 2D mode.
Meanwhile, the server parses the film source address information, such as the film source name, and queries whether film sources in other play modes, such as 3D or panorama, exist. If they do, and the screen casting target is detected to be the virtual reality device 500, the server sends a recommendation picture for those film sources (hereinafter, the 3D film source is taken as an example), such as a preview picture, to the virtual reality device 500. The preview picture may present a media asset picture or poster picture according to the corresponding film source type, together with an entry link to the film source in the other play mode, and is displayed on the secondary interface beside the main interface.
After receiving the recommendation picture, the virtual reality device 500 presents it on the secondary interface while the main interface continues to play the screen casting picture of the 2D film source normally.
When the user selects the preview picture or poster picture corresponding to the 3D film source, through an air mouse, a gesture, or another selection method, the main interface switches from playing the 2D film source to playing the 3D film source, and the picture link corresponding to the 3D film source is accessed on the server so that the recommended content is presented on the main interface. At the same time, the recommendation picture on the secondary interface disappears.
To let the user experience the switched viewing effect more intuitively, guide them through the mode switch, and avoid disrupting their immersive experience, the recommendation picture sent by the server may be displayed only for a preset time, such as one minute; within that minute, the user's selection on the secondary interface is accepted and the main interface switches to playing the 3D film source. After one minute, the recommendation picture on the secondary interface is hidden and only the screen casting media asset picture is shown on the main interface, so the user can focus on playback. When the user triggers the selection instruction or control instruction again, the recommendation picture on the secondary interface and the content of the service classification expansion area can be displayed again.
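The timed auto-hide can be sketched with a scheduled task, as below; the `show`/`hide` runnables stand in for the device's real UI calls, and the one-minute value mirrors the example above:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Shows the recommendation picture, then hides it after a preset time; sketch only.
public final class RecommendationTimer {
    // Single-threaded scheduler; shut down when the UI is destroyed.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void showRecommendation(Runnable show, Runnable hide) {
        show.run();                                     // present on the secondary interface
        scheduler.schedule(hide, 1, TimeUnit.MINUTES);  // auto-hide after the preset time
    }
}
```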
In this embodiment, unlike a display device that shows only 2D film sources, the virtual reality device 500 supports viewing in 3D, 360°, and panoramic modes, whereas screen casting from a mobile device only fits the 2D film source scenario. Therefore, when the screen casting picture is a 2D film source, the device can send a request to the server, or directly receive a 3D film source link or entry issued by the server, to offer a more three-dimensional viewing mode: while watching a 2D film source, the user learns that a 3D film source is available and, after clicking, jumps directly to it. This provides a better screen casting experience and further improves viewing on the virtual reality device 500.
Based on the virtual reality device 500, some embodiments of the present application further provide a screen-casting media asset playing method, which includes the following steps:
s1: acquiring a control instruction input by a user for establishing the screen casting connection;
s2: receiving, in response to the control instruction, screen casting data sent by the intelligent terminal, wherein the screen casting data includes screen casting media asset information;
s3: acquiring a film source address corresponding to at least one play mode from the server according to the screen casting media asset information;
s4: and accessing the film source address to control the display to present the media asset picture corresponding to the film source address.
According to the technical solutions above, the screen casting media asset playing method provided by this embodiment can receive screen casting data after receiving the user's control instruction and extract the screen casting media asset information from it. The screen casting media asset information is then used to acquire film source addresses for different play modes from the server, so that media asset data in a specified play mode is obtained by accessing the film source address and the screen casting picture is played. The method can acquire media asset data in the specified play mode while displaying the screen casting data, so that the media asset content cast from the intelligent terminal can be played in a 3D or panoramic mode, solving the problems of a single play mode and difficulty in obtaining an immersive effect.
The embodiments provided in the present application are only a few examples of its general concept and do not limit its scope. Any other embodiment extended from the solutions of the present application without inventive effort by a person skilled in the art falls within its protection scope.

Claims (10)

1. A virtual reality device, comprising:
a display configured to present a user interface including a centrally disposed main interface and a secondary interface located to the side of the main interface;
a communicator configured to establish a screen casting connection with an intelligent terminal and perform data interaction with a server, wherein the display mode of the intelligent terminal is a 2D mode;
a controller configured to:
acquiring a control instruction input by a user for establishing the screen casting connection and controlling the casting of media assets;
receiving, in response to the control instruction, screen casting data sent by the intelligent terminal, wherein the screen casting data includes screen casting media asset information;
controlling the display to present a screen casting media asset picture on the main interface in the 2D mode and present a recommendation picture on the secondary interface, wherein the recommendation picture is an entry picture, sent by the server, of recommended resources whose presentation mode differs from the 2D mode, and the recommended resources are associated with the cast media assets;
and in response to the user's selection of the recommendation picture, controlling the main interface of the display to present the recommended resources in a presentation mode different from the 2D mode.
2. The virtual reality device of claim 1, wherein the screen casting media asset information includes a URL address of the current media asset, and the controller is further configured to, after receiving the screen casting data sent by the intelligent terminal in response to the control instruction and before controlling the display to present the screen casting media asset picture on the main interface in the 2D mode:
acquire the screen casting media assets from the server based on the URL address of the current media asset, wherein the presentation form of the screen casting media assets is the 2D mode.
3. The virtual reality device of claim 2, wherein the controller is further configured to: while the display presents the screen casting media asset picture on the main interface in the 2D mode, receive a recommendation picture issued by the server and associated with recommended resources whose presentation mode differs from the 2D mode, and present the recommendation picture on the secondary interface.
4. The virtual reality device of claim 1, wherein controlling, in response to the user's selection of the recommendation picture, the main interface of the display to present the recommended resources in a presentation mode different from the 2D mode specifically includes:
in response to the user's selection of the recommendation picture, acquiring from the server the recommended resources whose presentation mode differs from the 2D mode, switching the player to match the presentation mode of the recommended resources, and controlling the main interface of the display to present the recommended resources in the presentation mode different from the 2D mode.
5. A server for providing a screen casting service for a virtual reality device, characterized in that the server is configured to:
after receiving screen casting media asset information sent by an intelligent terminal, send the screen casting media assets corresponding to the screen casting media asset information to the virtual reality device, wherein the optimal play mode of the screen casting media assets is the 2D mode;
parse the screen casting media asset information and match, in a resource library, recommended resources for which the optimal play mode corresponding to the screen casting media assets is 3D;
and send a recommendation picture corresponding to the recommended resources to the virtual reality device.
6. The server according to claim 5, wherein the server is further configured to:
receive a request for acquiring the recommended resources and send the recommended media assets to the virtual reality device.
7. A screen casting method, characterized by comprising:
acquiring a control instruction input by a user for establishing a screen casting connection and controlling the casting of media assets;
receiving, in response to the control instruction, screen casting data sent by an intelligent terminal, wherein the screen casting data includes screen casting media asset information;
controlling a display to present a screen casting media asset picture on a main interface in a 2D mode and present a recommendation picture on a secondary interface, wherein the recommendation picture is an entry picture, sent by a server, of recommended resources whose presentation mode differs from the 2D mode, and the recommended resources are associated with the cast media assets;
and in response to the user's selection of the recommendation picture, controlling the main interface of the display to present the recommended resources in a presentation mode different from the 2D mode.
8. The screen casting method according to claim 7, wherein the screen casting media asset information includes a URL address of the current media asset, and after receiving, in response to the control instruction, the screen casting data sent by the intelligent terminal, the screen casting data including the screen casting media asset information, the method further includes:
acquiring, by the virtual reality device, the screen casting media assets from the server based on the URL address of the current media asset, wherein the presentation form of the screen casting media assets is the 2D mode.
9. The screen casting method of claim 8, further comprising: while the display presents the screen casting media asset picture on the main interface in the 2D mode, receiving a recommendation picture issued by the server and associated with recommended resources whose presentation mode differs from the 2D mode, and presenting the recommendation picture on the secondary interface.
10. The screen casting method of claim 7, wherein controlling, in response to the user's selection of the recommendation picture, the main interface of the display to present the recommended resources in a presentation mode different from the 2D mode includes:
in response to the user's selection of the recommendation picture, acquiring from the server the recommended resources whose presentation mode differs from the 2D mode, switching the player to match the presentation mode of the recommended resources, and controlling the main interface of the display to present the recommended resources in the presentation mode different from the 2D mode.
CN202110325049.4A 2021-01-18 2021-03-26 Virtual reality equipment and screen-casting media asset playing method Pending CN115129280A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110325049.4A CN115129280A (en) 2021-03-26 2021-03-26 Virtual reality equipment and screen-casting media asset playing method
PCT/CN2021/137059 WO2022151882A1 (en) 2021-01-18 2021-12-10 Virtual reality device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110325049.4A CN115129280A (en) 2021-03-26 2021-03-26 Virtual reality equipment and screen-casting media asset playing method

Publications (1)

Publication Number Publication Date
CN115129280A true CN115129280A (en) 2022-09-30

Family

ID=83374599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110325049.4A Pending CN115129280A (en) 2021-01-18 2021-03-26 Virtual reality equipment and screen-casting media asset playing method

Country Status (1)

Country Link
CN (1) CN115129280A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116795316A (en) * 2023-08-24 2023-09-22 南京维赛客网络科技有限公司 Method, system and storage medium for playing pictures in scene in small window during screen projection
CN116795316B (en) * 2023-08-24 2023-11-03 南京维赛客网络科技有限公司 Method, system and storage medium for playing pictures in scene in small window during screen projection


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination