CN114327033A - Virtual reality equipment and media asset playing method - Google Patents



Publication number
CN114327033A
CN114327033A (application CN202110280647.4A)
Authority
CN
China
Prior art keywords
media asset
playing
data
film source
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110280647.4A
Other languages
Chinese (zh)
Inventor
郑美燕
孟亚州
王大勇
姜璐珩
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202110280647.4A priority Critical patent/CN114327033A/en
Priority to PCT/CN2022/078018 priority patent/WO2022193931A1/en
Publication of CN114327033A publication Critical patent/CN114327033A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual reality device and a media asset playing method. After receiving a control instruction from a user, the method extracts key information from the instruction and uses it to query playing data in a database, so that media asset data is displayed according to the playing data. After the user specifies a film source type, the method automatically selects the corresponding playing mode without manual operation by the user. Conversely, after the user specifies a playing mode, the method automatically selects a suitable film source type, so that the user can experience the effects of different playing modes.

Description

Virtual reality equipment and media asset playing method
Technical Field
The application relates to the technical field of virtual reality, in particular to virtual reality equipment and a media asset playing method.
Background
Virtual Reality (VR) technology is a display technology that uses a computer to simulate a virtual environment, giving the user a sense of immersion in that environment. A virtual reality device is a device that uses virtual display technology to present a virtual picture to a user. Generally, a virtual reality device includes two display screens for presenting the virtual picture content, corresponding respectively to the user's left and right eyes. When the content displayed on the two screens comes from images of the same object captured from different viewing angles, the user experiences a stereoscopic view.
A virtual reality device can play multimedia resources of various film source types, such as 2D film sources, 3D film sources, and panoramic film sources. Different film source types require different playing modes, i.e. 2D mode, 3D mode, panorama mode, and so on. When using the virtual reality device, the user can select a suitable mode so that the device displays the corresponding media asset picture content in the playing interface.
Traditionally, the playing mode must be selected manually: even when a piece of media asset data has only one film source type, the user confirms the most suitable playing mode by observing the playing effect in each mode in turn. This method is not only cumbersome to operate, but also only suits media asset data that has a single appropriate playing mode; for some media asset data, the user cannot experience the effects of different playing modes.
Disclosure of Invention
The application provides a virtual reality device and a media asset playing method, aiming to solve the problem that traditional playing methods cannot automatically select a playing mode or film source type.
In one aspect, the present application provides a virtual reality device, comprising: a display and a controller, wherein the display is configured to display a playback interface and other user interfaces; the controller is configured to perform the following program steps:
receiving a control instruction which is input by a user and used for playing media asset data;
extracting key information from the control instruction in response to the control instruction;
using the key information to query playing data in a database, wherein the playing data comprises a playing mode and/or a film source type, and the database comprises mapping relations between a plurality of film source types and a plurality of playing modes;
and controlling the display to display the media asset data in the playing interface according to the playing data.
On the other hand, the present application further provides a media asset playing method, which is applied to the virtual reality device, and the media asset playing method includes the following steps:
receiving a control instruction which is input by a user and used for playing media asset data;
extracting key information from the control instruction in response to the control instruction;
using the key information to query playing data in a database, wherein the playing data comprises a playing mode and/or a film source type, and the database comprises mapping relations between a plurality of film source types and a plurality of playing modes;
and controlling the display to display the media asset data in the playing interface according to the playing data.
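As a minimal sketch (not part of the patent's disclosure), the database lookup in the steps above can be modeled as a two-way mapping between film source types and playing modes, queried with whichever key information the control instruction carries; all function names and concrete type strings here are assumptions:

```python
# Assumed mapping between film source types and their playing modes.
FILM_SOURCE_TO_MODE = {
    "2d": "2D mode",
    "3d_left_right": "3D mode",
    "3d_top_bottom": "3D mode",
    "panorama_360": "panorama mode",
}

# Reverse mapping: playing mode -> film source types that support it.
MODE_TO_SOURCES = {}
for source, mode in FILM_SOURCE_TO_MODE.items():
    MODE_TO_SOURCES.setdefault(mode, []).append(source)

def query_play_data(key_info):
    """Given key information extracted from a control instruction,
    return the playing data: the playing mode and/or film source type."""
    if "film_source" in key_info:
        source = key_info["film_source"]
        # user specified a film source type: look up its playing mode
        return {"film_source": source,
                "play_mode": FILM_SOURCE_TO_MODE.get(source)}
    if "play_mode" in key_info:
        mode = key_info["play_mode"]
        # user specified a playing mode: pick a suitable film source type
        sources = MODE_TO_SOURCES.get(mode, [])
        return {"play_mode": mode,
                "film_source": sources[0] if sources else None}
    return {}
```

The patent itself realizes this mapping as a database (fig. 12 mentions the MyBatis framework); the in-memory dictionaries above merely stand in for those tables.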
According to the above technical solution, the virtual reality device and the media asset playing method can, after receiving a control instruction from the user, extract key information from the instruction and use it to query playing data in a database, so that the media asset data is displayed according to the playing data. After the user specifies a film source type, the method automatically selects the corresponding playing mode without manual operation by the user. Conversely, after the user specifies a playing mode, the method automatically selects a suitable film source type, so that the user can experience the effects of different playing modes.
Drawings
In order to more clearly explain the technical solution of the present application, the drawings needed in the embodiments are briefly described below. It is obvious that other drawings can be derived from these drawings by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of the present application;
FIG. 2 is a schematic diagram of a VR scene global interface in an embodiment of the application;
FIG. 3 is a schematic diagram of a recommended content area of a global interface in an embodiment of the present application;
FIG. 4 is a schematic diagram of an application shortcut operation entry area of a global interface in an embodiment of the present application;
FIG. 5 is a schematic diagram of a floating element of a global interface in an embodiment of the present application;
FIG. 6 is a schematic diagram of a playback interface in an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a partition of a playing interface region in an embodiment of the present application;
FIG. 8 is a schematic diagram of a mode switching interface according to an embodiment of the present application;
FIG. 9 is a schematic flow chart illustrating a media asset playing method according to an embodiment of the present application;
FIG. 10 is a flow chart illustrating database maintenance in an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a database creation process according to an embodiment of the present application;
FIG. 12 is a schematic flow chart of creating a database by using a MyBatis framework in the embodiment of the present application;
FIG. 13 is a flowchart illustrating a process of identifying a media asset source type according to an embodiment of the present application;
FIG. 14 is a flowchart illustrating a process of querying playing data according to an embodiment of the present application;
fig. 15 is a schematic flowchart illustrating a procedure of invoking media asset data in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, but not all the embodiments.
All other embodiments obtained by a person skilled in the art from the exemplary embodiments shown in the present application without inventive effort shall fall within the scope of protection of the present application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosed solution can also be practiced separately from the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the drawings of the present application are used to distinguish between similar elements and are not necessarily intended to describe a particular sequence or chronological order. Data so referred to are interchangeable under appropriate circumstances, so that the embodiments of the application can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments of the present application, the virtual reality device 500 generally refers to a display device that can be worn on the face of a user to provide an immersive experience, including but not limited to VR glasses, Augmented Reality (AR) devices, VR game devices, mobile computing devices, and other wearable computers. The technical solutions of the embodiments are described taking VR glasses as an example, and it should be understood that they can also be applied to other types of virtual reality devices. The virtual reality device 500 may operate independently or may be connected to another intelligent display device as an external device, where the display device may be a smart television, a computer, a tablet computer, a server, or the like.
The virtual reality device 500 may be worn on the face of the user and display a media picture close to the user's eyes, so as to provide an immersive experience. To present the picture and to be worn on the face, the virtual reality device 500 may include a number of components. Taking VR glasses as an example, the virtual reality device 500 may include, but is not limited to, at least one of a housing, a position fixture, an optical system, a display assembly, a gesture detection circuit, an interface circuit, and the like. In practice, the optical system, display assembly, gesture detection circuit, and interface circuit may be arranged in the housing to present a specific display picture; the two sides of the housing are connected to the position fixtures so that the device can be worn on the user's head.
The gesture detection circuit incorporates gesture detection elements such as a gravity acceleration sensor and a gyroscope. When the user's head moves or rotates, the circuit detects the user's gesture and transmits the detected gesture data to a processing element such as a controller, and the processing element adjusts the specific picture content in the display assembly according to the detected gesture data.
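As a rough illustration (not part of the patent's disclosure), the controller's adjustment step can be sketched as integrating the gyroscope's angular rates into a viewing direction; the function name and the yaw/pitch representation are assumptions:

```python
def update_view(yaw, pitch, yaw_rate, pitch_rate, dt):
    """Integrate gyroscope angular rates (degrees per second) over a
    time step dt to update the viewing direction. Yaw wraps around a
    full circle; pitch is clamped so the view cannot flip over the poles."""
    yaw = (yaw + yaw_rate * dt) % 360.0
    pitch = max(-90.0, min(90.0, pitch + pitch_rate * dt))
    return yaw, pitch
```

For example, half a second of head rotation at 90 degrees per second turns the view 45 degrees in yaw; the renderer then redraws the scene from the new direction.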
In some embodiments, the virtual reality device 500 shown in fig. 1 may access the display device 200, and construct a network-based display system with the server 400, and data interaction may be performed among the virtual reality device 500, the display device 200, and the server 400 in real time, for example, the display device 200 may obtain media data from the server 400 and play the media data, and transmit specific picture content to the virtual reality device 500 for display.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device, among others. The particular display device type, size, resolution, etc. are not limiting, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired. The display apparatus 200 may provide a broadcast receiving television function and may additionally provide an intelligent network television function of a computer support function, including but not limited to a network television, an intelligent television, an Internet Protocol Television (IPTV), and the like.
The display device 200 and the virtual reality device 500 also perform data communication with the server 400 through a plurality of communication methods. The display device 200 and the virtual reality device 500 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various content and interactions to the display device 200. Illustratively, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information, as well as through Electronic Program Guide (EPG) interactions. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers. Other web service content such as video on demand and advertisement services may also be provided through the server 400.
In the course of data interaction, the user may operate the display apparatus 200 through the mobile terminal 300 and the remote controller 100. The mobile terminal 300 and the remote controller 100 may communicate with the display device 200 in a direct wireless connection manner or in an indirect connection manner. That is, in some embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 through a direct connection manner such as bluetooth, infrared, etc. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may directly transmit the control command data to the display device 200 through bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may also access the same wireless network with the display apparatus 200 through a wireless router to establish indirect connection communication with the display apparatus 200 through the wireless network. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may transmit the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also use the mobile terminal 300 and the remote controller 100 to directly interact with the virtual reality device 500, for example, the mobile terminal 300 and the remote controller 100 may be used as a handle in a virtual reality scene to implement functions such as somatosensory interaction.
In some embodiments, the display assembly of the virtual reality device 500 includes a display screen and drive circuitry associated with the display screen. In order to present a specific picture with a stereoscopic effect, the display assembly may include two display screens, corresponding respectively to the user's left and right eyes. When a 3D effect is presented, the picture content displayed on the left and right screens differs slightly, showing respectively the images captured by the left and right cameras when the 3D film source was shot. Because the user observes the picture content with the left and right eyes, a display picture with a strong stereoscopic impression is perceived while wearing the device.
The optical system in the virtual reality device 500 is an optical module consisting of a plurality of lenses. The optical system is arranged between the user's eyes and the display screen; through the refraction of the lenses and the polarization effect of polarizers on the lenses, it increases the optical path so that the content displayed by the display assembly appears clearly within the user's field of view. Meanwhile, to adapt to the eyesight of different users, the optical system supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the distance between the lenses and thus the optical path, so as to adjust the clarity of the picture.
The interface circuit of the virtual reality device 500 may be configured to transmit interactive data, and in addition to the above-mentioned transmission of the gesture data and the display content data, in practical applications, the virtual reality device 500 may further connect to other display devices or peripherals through the interface circuit, so as to implement more complex functions by performing data interaction with the connection device. For example, the virtual reality device 500 may be connected to a display device through an interface circuit, so as to output a displayed screen to the display device in real time for display. As another example, the virtual reality device 500 may also be connected to a handle via an interface circuit, and the handle may be operated by a user's hand, thereby performing related operations in the VR user interface.
Wherein the VR user interface may be presented as a plurality of different types of UI layouts according to user operations. For example, the user interface may include a global UI. As shown in fig. 2, after the AR/VR terminal is started, the global UI may be displayed in a display screen of the AR/VR terminal or a display of the display device. The global UI may include a recommended content area 1, a business class extension area 2, an application shortcut operation entry area 3, and a floating element area 4.
The recommended content area 1 is used for configuring TAB columns of different classifications. Media resources, special subjects, and the like can be selected and configured in a column. The media assets can include services with media asset content such as 2D movies, education courses, tourism, 3D, 360-degree panorama, live broadcast, 4K movies, program applications, and games. A column can use different template styles and can support simultaneous recommendation and arrangement of media assets and titles, as shown in FIG. 3.
In some embodiments, a status bar may further be disposed at the top of the recommended content area 1, with a plurality of display controls arranged in it, including common options such as the time, network connection status, and battery level. The content of the status bar may be customized by the user; for example, content such as the weather or the user's avatar may be added. The content in the status bar may be selected by the user to perform the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display a time window in the current interface or jump to a calendar interface. When the user clicks the network connection status option, the virtual reality device 500 may display a WiFi list in the current interface or jump to the network settings interface.
The content displayed in the status bar may be presented in different forms according to the state of the specific item. For example, the time control may be displayed directly as time text, showing different text at different times; the battery control may be displayed in different icon styles according to the remaining battery level of the virtual reality device 500.
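A tiny sketch (not part of the patent's disclosure) of such state-dependent presentation: the time control renders as text, and a battery icon style is chosen from the remaining level. The function name, icon names, and thresholds are all assumptions:

```python
def status_bar_items(hour, minute, battery):
    """Render the time control as text and pick a battery icon style
    from the remaining battery level (thresholds are illustrative)."""
    time_text = f"{hour:02d}:{minute:02d}"
    if battery >= 80:
        icon = "battery_full"
    elif battery >= 30:
        icon = "battery_half"
    else:
        icon = "battery_low"
    return time_text, icon
```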
The status bar is used to enable the user to perform common control operations, enabling rapid setup of the virtual reality device 500. Since the setup program for the virtual reality device 500 includes many items, all commonly used setup options are typically not displayed in their entirety in the status bar. To this end, in some embodiments, an expansion option may also be provided in the status bar. After the expansion option is selected, an expansion window may be presented in the current interface, and a plurality of setting options may be further set in the expansion window for implementing other functions of the virtual reality device 500.
For example, in some embodiments, after the expansion option is selected, a "shortcut center" option may be provided in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 may display a shortcut center window. The shortcut center window may include "screen capture", "screen recording", and "screen projection" options, each waking up the corresponding function.
The service class extension area 2 supports configuring extension classes of different categories. If a new service type appears, an independent TAB can be configured and the corresponding page content displayed. The extended classifications in the service class extension area 2 can also be reordered and taken offline. In some embodiments, the service class extension area 2 may include the following content: movies & TV, education, tourism, applications, and my. In some embodiments, the service class extension area 2 is configured to present the TABs of major service categories and supports configuring more categories, as shown in fig. 3.
The application shortcut operation entry area 3 can specify pre-installed applications to be displayed up front for operational recommendation, and supports configuring a special icon style to replace the default icon; a plurality of pre-installed applications can be specified. In some embodiments, the application shortcut operation entry area 3 further includes left and right movement controls for moving the selection target, used to select different icons, as shown in fig. 4.
The floating element area 4 may be configured above the left or right oblique side of the fixed area, and may be configured as a replaceable element or as a jump link. For example, after receiving a confirmation operation, the floating element jumps to an application or displays a designated function page, as shown in fig. 5. In some embodiments, the floating element may not be configured with a jump link and is used solely for image presentation.
In some embodiments, the global UI further comprises a status bar at the top for displaying the time, network connection status, power status, and more shortcut entries. When an icon is selected using the handle of the AR/VR terminal, i.e. with the handheld controller, the icon displays a text prompt and is stretched and expanded to the left or right according to its position.
For example, after the search icon is selected, it displays the text "search" together with the original icon; further clicking the icon or the text jumps to the search page. As further examples, clicking the favorites icon jumps to the favorites TAB, clicking the history icon displays the history page at the default location, clicking the search icon jumps to the global search page, and clicking the message icon jumps to the message page.
In some embodiments, the interaction may be performed through a peripheral. For example, the handle of the AR/VR terminal may operate the user interface of the AR/VR terminal and may include: a return button; a home key, a long press of which implements a reset function; volume up and down buttons; and a touch area, which implements clicking, sliding, pressing and holding a focus, and dragging.
The user may enter different scene interfaces through the global interface. For example, as shown in figs. 6 and 7, the user may enter the playing interface through the "playing interface" entry in the global interface, or start the playing interface by selecting any media asset in the global interface. In the playing interface, the virtual reality device 500 may create a 3D scene through the Unity 3D engine and render specific picture content in that scene.
In the playing interface, the user can watch specific media asset content. To provide a better viewing experience, different virtual scene controls may further be arranged in the playing interface, cooperating with the media asset content to present specific scenes or enable real-time interaction. For example, a panel control may be loaded in the Unity 3D scene to present the picture content, matched with other virtual home-furnishing controls to achieve the effect of a cinema screen.
The virtual reality device 500 may present operation UI content in the playing interface. For example, an asset list UI control may be displayed in front of the display panel in the Unity 3D scene. The asset list may display icons of media assets stored locally on the virtual reality device 500, or icons of network media assets playable on the virtual reality device 500. The user can select any icon in the media asset list to play the corresponding media asset data, and the selected media asset is displayed in real time on the display panel.
The assets that can be displayed in the Unity 3D scene can be in various forms such as pictures, videos, and the like, and due to the display characteristics of the VR scene, the assets displayed in the Unity 3D scene include at least 2D pictures or videos, 3D pictures or videos, panoramic pictures or videos, and the like.
A 2D picture or video is a traditional picture or video file; when displayed, the same image is shown on the two display screens of the virtual reality device 500. In this application, 2D pictures and videos are collectively referred to as a 2D film source. A 3D picture or video, i.e. a 3D film source, is produced by shooting the same object from different angles with at least two cameras; it can display different images on the two screens of the virtual reality device 500, thereby realizing a stereoscopic effect. A panoramic picture or video, i.e. a panoramic film source, is a panoramic image obtained by a panoramic camera or by special shooting means; such pictures can be displayed by creating a display sphere in the Unity 3D scene to present a panoramic effect.
For the 3D film source, the film source can be further divided into a left-right type 3D film source, a top-bottom type 3D film source, and the like according to the picture arrangement mode of each frame of image in the film source. Each frame of image of the left and right 3D film source comprises a left part and a right part which are image pictures shot by a left-eye camera and a right-eye camera respectively. For the panoramic film source, the film source forms such as 360-degree panorama, 180-degree panorama and fisheye panorama can be further divided according to the image visual field of the panoramic film source, and the image synthesis modes of each frame of image in different panoramic film source forms are different. In order to present better stereoscopic effect, the panorama source may further include a true panorama type, a left-right panorama type, a top-bottom panorama type, and the like.
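As an illustrative sketch only (the type names and helper below are hypothetical, not part of the patent), the film source taxonomy described above can be modeled as a simple enumeration:

```python
from enum import Enum

class FilmSourceType(Enum):
    """Film source types described above (names are illustrative)."""
    SOURCE_2D = "2d"
    SOURCE_3D_LEFT_RIGHT = "3d_lr"   # left/right picture halves per frame
    SOURCE_3D_TOP_BOTTOM = "3d_tb"   # top/bottom picture halves per frame
    PANORAMA_360 = "pano_360"
    PANORAMA_180 = "pano_180"
    PANORAMA_FISHEYE = "pano_fisheye"

def is_stereo(source: FilmSourceType) -> bool:
    """3D film sources carry two eye views inside each frame image."""
    return source in (FilmSourceType.SOURCE_3D_LEFT_RIGHT,
                      FilmSourceType.SOURCE_3D_TOP_BOTTOM)
```

A playback routine can branch on such a type to decide whether a frame must be split into two eye views.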
Because the media asset data that can be displayed in the playing interface spans multiple film source types, and different film source types require different image output modes, a UI control for playback control can further be arranged in the playing interface. For example, a UI control for playback control may be disposed in front of the display panel as a floating interactive UI control, i.e., its display may be triggered by a specific trigger action. As shown in fig. 8, the UI control may include a "mode switching" option; when the user clicks this option, a mode list is displayed, including mode options such as "2D mode" and "3D mode". After the user selects any mode option in the mode list, the virtual reality device can be controlled to play the media asset data in the media asset list according to the selected mode.
Similarly, when the user selects a certain asset item in the list, the playing interface can play the asset item, that is, the picture corresponding to the asset item is displayed on the display panel. In the process of playing the media asset items, the user can also select any mode option in the UI control interface by calling the UI control for playing control, switch the playing mode and play the selected media asset data according to the switched playing mode.
In order to adapt to different playing modes, in some embodiments one media asset item comprises data in multiple forms. For example, some media asset items contain both 2D media asset data and 3D media asset data. That is, one media asset item corresponds to two media asset files: each frame image in one file contains only a single picture, while each frame image in the other contains two pictures, arranged as left and right parts (or upper and lower parts). For such a media asset item, one of the media asset files can be selected for playing according to the playing mode, so as to obtain different effects.
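One way to picture this (a minimal sketch with hypothetical names, not the patent's implementation) is an asset item that maps each available film source form to its media asset file:

```python
class MediaAssetItem:
    """One media asset item mapping each film source form to a file address."""
    def __init__(self, name, files):
        self.name = name
        self.files = files  # e.g. {"2d": "a_2d.mp4", "3d_lr": "a_3d.mp4"}

    def file_for_source(self, source_type):
        """Pick the media asset file matching the requested film source form,
        or None if this item has no file in that form."""
        return self.files.get(source_type)

# an item carrying both a 2D file and a left-right type 3D file
asset_a = MediaAssetItem("Asset A", {"2d": "a_2d.mp4", "3d_lr": "a_3d.mp4"})
```

During playback, the mode chosen by the user (or by the automatic selection described below) determines which of the two files is retrieved.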
As can be seen, in the process of playing media assets on the virtual reality device 500, a plurality of film source types correspond to a plurality of playing modes, and display errors are easily produced during playing. To this end, some embodiments of the present application provide a virtual reality device 500, which may include a display and a controller. The display is used for displaying the playing interface and other user interfaces. The controller can automatically select a playing mode or a film source type by executing a media asset playing method, thereby mitigating display errors. As shown in fig. 9, the controller may be configured to perform the following program steps:
S1: receiving a control instruction input by a user for playing media asset data.
The virtual reality device 500 may receive various control instructions input by the user in practical applications, and different control instructions correspond to different interactive actions and implement different functions. For example, the control instruction for playing media asset data may be input by the user completing the interactive action of selecting to open any picture or video file on the file management interface, or of selecting to open any picture or video link in the media asset recommendation interface.
The control instruction for playing media asset data can also be input through other interactive actions. For example, while the display shows the playing interface, the user may complete the input by clicking a media asset item in the media asset list. For a virtual reality device 500 supporting other interaction modes, the control instruction can be input through the supported input mode. For example, for a virtual reality device 500 supporting an intelligent voice system, the input may also be completed by speaking phrases such as "open × (media asset name)" or "I want to see ×".
S2: extracting key information from the control instruction in response to the control instruction.
After receiving a playback control instruction input by the user, the virtual reality device 500 may extract key information from the control instruction so as to automatically select a playing mode or film source type according to it. The key information may include information related to the content specified by the user and information related to the media asset data to be played. For example, the key information may include the playing mode specified by the user in the playing interface, the name of the media asset item clicked by the user during the interaction, and detailed information of the media asset data under that item name, such as the file description and format extension.
The key information is used for the subsequent automatic selection of a playing mode or a film source type, and therefore its content should be able to indicate the subsequent automatic playing mode. For example, when a user selects media asset data of a 3D film source type to play, the key information should include the selected media asset item information and indicate that the film source type to be played for the current media asset item is a 3D film source. During playing mode switching, the key information should include the currently played media asset item information and the switched playing mode.
S3: querying the playing data in a database by using the key information.
After extracting the key information, the virtual reality device 500 may also perform a matching query using the key information to obtain playing data suitable for the content specified in the key information. Wherein the playing data comprises a playing mode or a film source type. According to different key information extracted from the control instruction, the content of the playing data obtained by inquiring is different. For example, when a user selects media asset data of a 3D film source type to play, and the corresponding key information specifies the film source type, the play data includes that the play mode is a 3D mode; when the user controls the play mode of the virtual reality device 500 to switch from the 2D mode to the 3D mode, and the play mode is specified in the corresponding key information, the play data includes a film source type as a 3D film source.
In order to facilitate querying the playing data, a database may be pre-constructed in the virtual reality device 500, and the database may include mapping relationships between multiple film source types and multiple playing modes. All of the film source types and all of the playback modes that can be supported by the current virtual reality device 500 can be included in the database. The database may be invoked when the virtual reality device 500 enters the playback interface in order to perform the query process.
In some embodiments, the database may also include related content for identifying the current playing mode or film source type: for example, a file format and file description information for determining the film source type, and playback program code and a playing mode tag for determining the playing mode. Such content enables the virtual reality device 500 to accurately recognize the current user intent so as to perform automatic selection. Moreover, this content reduces the amount of key information the virtual reality device 500 must extract; that is, the device can directly extract key information that is easy to obtain, and determine the film source type or playing mode specified by the user from this content.
S4: controlling the display to display the media asset data in the playing interface according to the playing data.
After the playing data corresponding to the key information is queried, the virtual reality device 500 may execute a playing program according to the corresponding playing data, so as to display a picture corresponding to the media asset data in the playing interface. For the playing process of the media asset data, the virtual reality device 500 may control the corresponding media asset playing process according to the content specified in the control instruction and the content of the playing data obtained by querying.
For example, suppose the film source type of the current media asset data extracted from the control instruction is a left-right type 3D film source, and the query obtains playing data indicating that the playing mode corresponding to the current 3D film source is the left-right type 3D mode. Based on the control instruction and the playing data, playback is then performed as a left-right type 3D film source: each frame image in the media asset data is split into a left-eye image part and a right-eye image part along the vertical central axis, the left-eye image is sent to the display panel visible to the left-eye camera in the rendering scene for playing, and the right-eye image is sent to the display panel visible to the right-eye camera in the rendering scene for playing.
It can be seen that, in the above embodiment, after acquiring the control instruction input by the user, the virtual reality device 500 may extract the key information from the control instruction, and query the corresponding playing parameter in the database using the key information, so as to play the media asset data according to the playing parameter. In this way, the virtual reality device 500 can automatically complete the selection of the play mode or the film source type, so that the played media asset data can be played in the most appropriate manner, and the playing effect under multiple scenes can be obtained.
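The S1–S4 flow above might be sketched roughly as follows; the dictionary-based database and all names are assumptions for illustration, not the patent's actual data structures:

```python
def handle_play_instruction(instruction, database):
    """Sketch of S1-S4: given a control instruction, extract key
    information, query the complementary play data, and return the
    parameters handed to the playback routine."""
    # S2: key information - the asset plus either a user-specified film
    # source type or a user-specified playing mode
    key_info = {"asset": instruction["asset"]}
    if "film_source" in instruction:
        key_info["film_source"] = instruction["film_source"]
    if "play_mode" in instruction:
        key_info["play_mode"] = instruction["play_mode"]

    # S3: query the database for whichever half is missing
    if "film_source" in key_info:
        film_source = key_info["film_source"]
        play_mode = database["mode_for_source"][film_source]
    else:
        play_mode = key_info["play_mode"]
        film_source = database["source_for_mode"][play_mode]

    # S4: both values now drive the display of the media asset data
    return {"asset": key_info["asset"],
            "film_source": film_source, "play_mode": play_mode}

# illustrative mapping database (two directions of the same relation)
db = {"mode_for_source": {"2d": "2d_mode", "3d_lr": "3d_lr_mode"},
      "source_for_mode": {"2d_mode": "2d", "3d_lr_mode": "3d_lr"}}
```

Whichever half the user specifies, the other half is resolved from the mapping, which is the automatic selection the embodiment describes.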
Since the virtual reality device 500 in the above embodiment relies on the database to complete the query and matching of related content, in some embodiments the virtual reality device 500 may create the database and update it in real time according to the content of the currently displayed playing interface. That is, as shown in fig. 10, before the step of receiving the control instruction for playing media asset data input by the user, the method further includes:
S501: creating a database;
S502: traversing the media asset items in the current playing interface;
S503: extracting the film source type correspondingly contained in each media asset item;
S504: setting a suitable playing mode for each film source type to establish a mapping relation table;
S505: storing the mapping relation table in the database.
The virtual reality device 500 may create a database when its control system is initialized, for recording the mapping relationship between playing modes and film source types. The database may be created by the virtual reality device 500 from local data, or may be created by a service provider of the virtual reality device 500 through a cloud service.
Based on the created database, each time the playing interface is entered, the virtual reality device 500 may traverse the media asset items contained in the current playing interface and extract all film source types corresponding to each media asset item, ensuring that the information for those items can be queried in the database. For example, the current playing interface includes a media asset list containing a plurality of media asset items, where each media asset item may be associated with a plurality of media asset files; for example, media asset A includes two media asset files, a 2D media asset file and a left-right type 3D media asset file. When entering the playing interface, the film source types contained in media asset A are extracted as a 2D film source and a left-right type 3D film source.
After extracting the film source types, the virtual reality device 500 may set a corresponding play mode for each extracted film source type, that is, the play mode corresponding to the 2D film source is a 2D mode, and the play modes of the left and right 3D film sources are left and right 3D modes. And establishing a mapping relation table according to the film source type and the playing mode, wherein the mapping relation table comprises media asset file addresses and playing modes of a plurality of film source types.
Thus, by executing the setting process on all media asset items contained in the current playing interface, all film source types and corresponding playing modes in the current interface can be determined and added to the database in the form of a mapping relation table. The database maintenance process can enable the database to contain the mapping relation corresponding to the media asset items in the current interface, so that the film source type or the playing mode can be quickly inquired in the subsequent inquiry process.
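Steps S501–S505 could be sketched roughly as below, assuming a hypothetical list-of-dicts representation of the play interface's media asset items:

```python
def build_mapping_table(asset_items):
    """S502-S504 sketch: traverse the asset items in the play interface
    and map each contained film source type to a suitable playing mode
    and its media asset file address."""
    # supported film-source-to-mode pairs (illustrative values)
    mode_for_source = {"2d": "2d_mode",
                       "3d_lr": "3d_lr_mode",
                       "3d_tb": "3d_tb_mode"}
    table = []
    for item in asset_items:
        for source_type, address in item["files"].items():
            table.append({"item": item["name"],
                          "film_source": source_type,
                          "play_mode": mode_for_source[source_type],
                          "address": address})
    return table

# the current play interface's media asset list (illustrative)
interface_items = [{"name": "Asset A",
                    "files": {"2d": "a_2d.mp4", "3d_lr": "a_3d.mp4"}}]
```

Each row of the resulting table pairs a film source type with a playing mode and a file address, matching the mapping relation table described above.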
In the process of querying the playing data, because one media asset item can support a plurality of playing modes, a "one-to-many"/"many-to-one" mapping relation exists between media asset items and playing modes. The one-to-many mapping relation means that, from the viewpoint of picture/video resources, one picture/video has multiple playing modes, i.e., one-to-many; the many-to-one mapping relation means that, from the viewpoint of playing modes, a plurality of playing modes correspond to one picture/video resource, i.e., many-to-one. Based on such a one-to-many/many-to-one mapping relation, the database may be created through a mapping framework. Frameworks capable of creating such a database include the open-source object-relational mapping framework Hibernate, Java Database Connectivity (JDBC), and the persistence-layer framework MyBatis.
The MyBatis framework has an interface binding function, including annotation-bound Structured Query Language (SQL) and Extensible Markup Language (XML)-bound SQL. The MyBatis framework also supports dynamic SQL with Object Graph Navigation Language (OGNL) expressions. Therefore, the MyBatis framework can flexibly configure the SQL statements to be run through XML or annotations, map Java objects to SQL statements to generate the SQL that is finally executed, and finally map the SQL execution results back into Java objects. MyBatis has a low learning threshold: a database maintainer can write raw SQL directly, so SQL execution performance can be strictly controlled, and flexibility is high.
Therefore, in some embodiments, since there is a one-to-many/many-to-one mapping between picture/video media assets and playing modes, the MyBatis framework can be selected to create and maintain the database. That is, as shown in fig. 11 and 12, the step of storing the mapping relation table in the database further includes:
S551: reading the mapping relation table through a Reader object;
S552: acquiring the SqlSession of the current thread to open a transaction;
S553: reading the operation number in the mapping relation table through the SqlSession, and reading the SQL statement;
S554: committing the transaction so as to store the mapping relation table in the database.
In the process of creating and maintaining the database with the MyBatis framework, the MyBatis mapping file, i.e., the mapping relation table, can be read through a Reader object; a SqlSessionFactory object is created through a SqlSessionFactoryBuilder object, and the SqlSession of the current thread is obtained. After the SqlSession is obtained, the transaction can be set to open by default, so that the operation number in the mapping file is read through the SqlSession, the SQL statement is read, the transaction is committed, and the mapping relation table is stored in the database.
It can be seen that, in this embodiment, a one-to-many/many-to-one mapping relation between media asset items and playing modes can be quickly established through the MyBatis framework and the mapping relation table. A database based on the MyBatis framework not only reduces developers' workload and learning burden, but also makes it convenient to uniformly maintain the media asset items in different playing interfaces, so that the virtual reality device 500 can quickly query the corresponding playing parameters through the database.
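The patent performs this persistence through the MyBatis framework in Java; as a stand-in, the following Python sketch uses the standard-library sqlite3 module to show the same open-transaction / execute / commit sequence (the table and column names are illustrative, not from the patent):

```python
import sqlite3

def store_mapping_table(conn, table_rows):
    """Write the mapping relation rows inside a transaction: the `with`
    block commits on success and rolls back on error, mirroring the
    open-transaction / execute-SQL / commit steps above."""
    conn.execute("""CREATE TABLE IF NOT EXISTS mapping (
                        item TEXT, film_source TEXT,
                        play_mode TEXT, address TEXT)""")
    with conn:  # transaction scope
        conn.executemany(
            "INSERT INTO mapping VALUES (?, ?, ?, ?)",
            [(r["item"], r["film_source"], r["play_mode"], r["address"])
             for r in table_rows])

conn = sqlite3.connect(":memory:")
rows = [{"item": "Asset A", "film_source": "3d_lr",
         "play_mode": "3d_lr_mode", "address": "a_3d.mp4"}]
store_mapping_table(conn, rows)
```

Once stored, the playing-data query of step S3 reduces to a SELECT on this table keyed by either the film source type or the playing mode.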
In some embodiments, in order to extract the key information from the control instruction, the virtual reality device 500 may obtain the data of the media assets to be played by parsing the control instruction; and extracting key words from the data of the media assets to be played, so that in the process of inquiring the playing data, the film source type or the playing mode can be inquired on the basis of the key words.
Since the virtual reality device 500 may query on the basis of either the film source type or the playing mode when using the key information to query the playing data, the keywords include the film source type of the media asset data to be played or the playing mode specified by the user.
For example, after the user clicks and opens one media asset file in the file management interface, the virtual reality device 500 may extract the format of the media asset file, the file description information, and the like to obtain the keyword in the process of switching to the play interface. Through the keyword information, the specified information of the user, namely the film source type of the media asset file can be determined.
For the playing process of automatically selecting the playing mode according to the film source type, since it may not be possible to accurately judge the content specified by the user simply through the keyword information, that is, it may not be possible to determine the film source type of the media asset file, the virtual reality device 500 may also determine the film source type of the media asset data to be played through an image recognition algorithm, that is, as shown in fig. 13, the step of extracting the keyword from the media asset data to be played further includes:
S211: extracting multi-frame image data from the media asset data;
S212: inputting the multi-frame image data into an image recognition model;
S213: acquiring the film source type output by the image recognition model.
In order to extract the film source type from the media asset data, the virtual reality device 500 may, during key information extraction, take the media asset data to be played obtained by parsing and extract multi-frame image data from it. The multi-frame image data may be a plurality of image frames at equal time intervals, so that the image pictures of the frames differ noticeably, making the subsequent image recognition process more accurate.
After the multiple frames of image data are extracted, they may be input into the image recognition model in sequence, so that the picture arrangement in the image data is determined through the model. The image recognition model may be a classification model obtained by training on sample data with a machine learning algorithm. That is, after the initial model is established, sample images with label information are input into the model and the classification results output by the model are obtained; the difference between the classification results and the label information is then back-propagated to adjust the model parameters. Through repeated input of a large number of samples, a classification model with a certain classification accuracy can be obtained.
The image recognition model can also be built by means of encapsulating image processing algorithms. That is, in the application, the arrangement of the pictures in the image can be determined by developing a series of image recognition programs and performing image recognition on the input image data through the image recognition programs. For example, the image recognition program may segment the image data to obtain at least two portions, calculate a similarity value of the two portions through a picture similarity algorithm, and determine that the current media asset data is a 3D image when the similarity value is greater than a set threshold.
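A toy version of such a similarity check (grayscale rows as plain lists standing in for image buffers; the pixel tolerance and threshold are arbitrary illustrative values) might look like:

```python
def looks_like_lr_3d(frame, threshold=0.9):
    """Split the frame at the vertical central axis and compare the two
    halves pixel by pixel; a high fraction of near-identical pairs
    suggests a left-right type 3D frame."""
    half = len(frame[0]) // 2
    matches = total = 0
    for row in frame:
        for x in range(half):
            total += 1
            if abs(row[x] - row[half + x]) <= 8:  # tolerance: illustrative
                matches += 1
    return (matches / total) >= threshold

# left and right halves identical -> plausibly a left-right 3D frame
frame_3d = [[10, 20, 30, 10, 20, 30],
            [40, 50, 60, 40, 50, 60]]
# halves unrelated -> plausibly an ordinary 2D frame
frame_2d = [[10, 20, 30, 200, 150, 90],
            [40, 50, 60, 0, 255, 128]]
```

Running the check over several frames sampled at equal intervals, as the embodiment suggests, reduces the chance of a single symmetric 2D frame being misclassified.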
Through image recognition performed on the multi-frame image data, the virtual reality device 500 may obtain the film source type of the media asset to be played, i.e., obtain the key information. This way of acquiring key information mitigates the influence of content such as the file format and description information, so that the virtual reality device 500 is applicable to film source type identification for most media asset files, improving the accuracy of automatic playing mode selection.
Since the key information extracted from the control command is different when the user specifies different contents, and the corresponding playing data queried using the key information is also different, as shown in fig. 14, the step of querying the playing data in the database using the key information further includes:
S310: acquiring the user-specified information in the control instruction;
S320: if the user-specified information is a specified film source type, querying the target playing mode in the database;
S330: if the user-specified information is a specified playing mode, querying the target film source type in the database.
After acquiring the control instruction input by the user, the virtual reality device 500 may extract the user-specified information from the control instruction, that is, extract the information explicitly selected by the user when performing the interactive action. For example, when the user selects any one of the left-right type 3D asset items to play, the user specifies that the film source type is a left-right type 3D film source. And when the user selects the option of 'mode switching-3D mode (left and right)' in turn in the playing interface, the user specifies that the playing mode is the left and right type 3D mode.
For different user-specified information, the virtual reality device 500 may query different playing data from the database. If the user-specified information is a specified film source type, the target playing mode is queried in the database; if the user-specified information is a specified playing mode, the target film source type is queried in the database. The target playing mode is a playing mode adapted to the specified film source type, and the target film source type is a film source type adapted to the specified playing mode.
For example, when the user-specified information indicates that the film source type is a left-right type 3D film source, the playing data queried in the database is the suitable playing mode, i.e., the left-right type 3D mode. When the user-specified information indicates that the playing mode is the left-right type 3D mode, the address of the left-right type 3D film source corresponding to the current media asset item is queried in the database for later retrieval and playing.
After querying the play data, the virtual reality device 500 may execute a play program for the asset data. In order to ensure the playing process to proceed smoothly, all the related playing data, such as the media asset item to be played, and the type, playing mode and address of the film source related to the media asset item to be played, should be defined before playing the media asset data. For example, if the user-specified information is a specified play mode, the virtual reality device 500 may first call media asset data of a target film source type, and parse the media asset data according to the specified play mode, that is, the virtual reality device 500 may obtain the media asset data from a film source address and perform operations such as decoding, so as to obtain a video data stream or a specific picture corresponding to the media asset data.
In order to enable the display to show specific picture content, after the media asset data is parsed, the parsed media asset data needs to be sent to the virtual rendering scene, so that the specific media asset picture content is displayed through the display panel in the rendering scene. Obviously, different playing modes correspond to different picture display manners in the rendered scene. For example, for a left-right type 3D film source, a 3D effect can be formed by sending the left and right parts of each frame image to the display panel visible to the left-eye camera and the display panel visible to the right-eye camera respectively, so that the media picture is output to the left and right displays separately.
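The per-frame split for a left-right type 3D film source can be illustrated with a minimal sketch (frames as 2D lists standing in for real image buffers; names are illustrative):

```python
def split_lr_frame(frame):
    """Split one left-right type 3D frame at the vertical central axis
    into the left-eye and right-eye pictures that are sent to the two
    display panels in the rendering scene."""
    half = len(frame[0]) // 2
    left_eye = [row[:half] for row in frame]
    right_eye = [row[half:] for row in frame]
    return left_eye, right_eye

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
left, right = split_lr_frame(frame)
```

A top-bottom type 3D source would be handled the same way, except the split runs along the horizontal central axis, i.e., across rows instead of columns.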
In the process of invoking the media asset data of the target film source type, the virtual reality device 500 may invoke the media asset data in different manners according to different media asset storage forms, that is, as shown in fig. 15, in some embodiments, the step of invoking the media asset data of the target film source type further includes:
S411: detecting the storage form of the media asset data;
S412: if the storage form is local storage, calling the media asset data of the target film source type from the local memory;
S413: if the storage form is network storage, extracting the media resource address of the target film source type;
S414: accessing the media resource address to acquire the media asset data of the target film source type.
The virtual reality device 500 may detect the storage form of the media asset data through information such as the file size and the file position of the media asset data. Generally, when a user jumps to a playing process of a playing interface from a file management interface, the played media asset data is stored in a local storage mode; and when the user jumps to the playing interface from the media asset recommending interface, the played media asset data is stored in a network storage mode.
In the playing process, if the storage form of the media asset data is local storage, calling the media asset data of the target film source type in a local storage; if the storage form is network storage, the media resource address of the target film source type can be extracted first, and then the media resource data corresponding to the film source type can be obtained by accessing the media resource address.
Obviously, for the same media asset item, the media asset data corresponding to different film source types may be stored partly locally and partly on the network. For example, the same media asset item A contains two film source forms, a 2D form and a 3D form, where the 2D media asset data is stored in local memory and the 3D media asset data is stored in network storage. When the user calls the media asset data of this item, the film source type to be played can be determined through the control instruction and the obtained playing data, so that the media asset data is called according to the corresponding film source type. For example, when the film source type specified in the queried playing data is the 3D film source, although the 2D film source of the media asset item is stored locally, the media resource address corresponding to the 3D film source still needs to be accessed to acquire the media asset data.
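Steps S411–S414 amount to a dispatch on the storage form; a minimal sketch with hypothetical field names and an example URL:

```python
def fetch_asset_data(asset, target_source):
    """S411-S414 sketch: for the target film source type, detect the
    storage form and return where the media asset data is retrieved from."""
    entry = asset["sources"][target_source]
    if entry["storage"] == "local":
        # local storage: call the data directly from local memory
        return ("local", entry["path"])
    # network storage: extract the media resource address, then access it
    return ("network", entry["url"])

# one item whose 2D form is local and whose 3D form is network-stored
asset_a = {"sources": {
    "2d": {"storage": "local", "path": "/media/a_2d.mp4"},
    "3d_lr": {"storage": "network", "url": "http://example.com/a_3d.mp4"}}}
```

Note that the dispatch is driven by the target film source type, not by whichever copy happens to be local, matching the example above.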
In some embodiments, if the user-specified information is a specified play mode, a mapping relationship may be established between a asset list and an asset data address in the current play interface, and the mapping relationship may enable a user to automatically access the asset data address in the specified play mode when clicking on an asset item in the asset list, so as to quickly obtain the appropriate asset data. That is, the virtual reality device 500 may extract the media asset list displayed in the current playing interface, traverse the media asset data addresses of each media asset item in the media asset list, and establish a mapping relationship between each media asset item and the media asset data address of the target film source type, so that when a user selects any media asset item in the media asset list, the media asset data of the target film source type is acquired.
Based on the virtual reality device 500, in some embodiments of the present application, a media asset playing method is further provided, including the following steps:
S1: receiving a control instruction which is input by a user and used for playing media asset data;
s2: extracting key information from the control instruction in response to the control instruction;
s3: using the key information to inquire playing data in a database, wherein the playing data comprises playing modes and/or film source types, and the database comprises mapping relations between a plurality of film source types and a plurality of playing modes;
s4: and controlling the display to display the media asset data in the playing interface according to the playing data.
As can be seen from the foregoing technical solutions, in the media asset playing method provided in this embodiment, after receiving a control instruction of a user, key information is extracted from the control instruction, and the playing data is queried in a database by using the key information, so that the media asset data is displayed according to the playing data. The media asset playing method can automatically select the corresponding playing mode after the user specifies the film source type, and does not need manual operation of the user. In addition, the media asset playing method can automatically select the appropriate film source type after the user specifies the playing mode, so that the user can experience the effects in different playing modes.
The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.

Claims (10)

1. A virtual reality device, comprising:
a display configured to display a play interface;
a controller configured to:
receiving a control instruction which is input by a user and used for playing media asset data;
extracting key information from the control instruction in response to the control instruction;
querying playing data in a database using the key information, wherein the playing data comprises a playing mode and/or a film source type, and the database comprises mapping relations between a plurality of film source types and a plurality of playing modes;
and controlling the display to display the media asset data in the playing interface according to the playing data.
2. The virtual reality device of claim 1, wherein prior to the step of receiving a control instruction input by a user for playing media asset data, the controller is further configured to:
creating a database;
traversing the media asset items in the current playing interface;
extracting the film source type correspondingly contained in each media asset item;
setting a corresponding playing mode for each film source type to establish a mapping relation table, wherein the mapping relation table comprises media asset file addresses and playing modes of a plurality of film source types;
and storing the mapping relation table to the database.
3. The virtual reality device of claim 2, wherein in the step of storing the mapping relationship table in the database, the controller is further configured to:
reading the mapping relation table through a Reader object;
acquiring the SQL session of the current thread to open a transaction;
reading the operation number in the mapping relation table through the SQL session, and loading an SQL statement;
and committing the transaction to store the mapping relation table to the database.
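As a hedged illustration of claim 3's transactional storage flow (using Python's built-in `sqlite3` as a stand-in for the device's database layer; the table schema is an assumption for illustration):

```python
import sqlite3

def store_mapping_table(rows):
    """Store (source_type, asset_address, play_mode) rows inside one
    transaction, mirroring the session/commit flow of claim 3."""
    conn = sqlite3.connect(":memory:")  # stand-in for the device database
    conn.execute(
        "CREATE TABLE mapping (source_type TEXT PRIMARY KEY,"
        " asset_address TEXT, play_mode TEXT)"
    )
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.executemany("INSERT INTO mapping VALUES (?, ?, ?)", rows)
    return conn

conn = store_mapping_table([
    ("3D", "http://cdn/a_3d", "stereo"),
    ("360", "http://cdn/a_360", "panoramic"),
])
print(conn.execute("SELECT COUNT(*) FROM mapping").fetchone()[0])  # → 2
```

The transaction boundary matters here: either the whole mapping relation table is persisted, or none of it is.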
4. The virtual reality device of claim 1, wherein in the step of extracting key information from the control instructions, the controller is further configured to:
parsing the control instruction to acquire the media asset data to be played;
and extracting keywords from the media asset data to be played, wherein the keywords comprise the film source type of the media asset data to be played or a playing mode specified by a user.
5. The virtual reality device of claim 4, wherein in the step of extracting keywords from the asset data to be played, the controller is further configured to:
extracting multi-frame image data from the media asset data;
inputting the multi-frame image data into an image recognition model;
and acquiring the film source type output by the image recognition model.
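Claim 5's image recognition model is not specified in detail; as a purely illustrative stand-in, a side-by-side 3D film source could be detected by comparing the left and right halves of sampled frames, which are near-identical in that format (the threshold and function names below are assumptions):

```python
def classify_film_source(frames):
    """frames: list of frames; each frame is a list of rows of
    grayscale pixel values. Votes across frames on whether the left
    and right halves are near-identical (side-by-side 3D)."""
    votes = 0
    for frame in frames:
        half = len(frame[0]) // 2
        diff = sum(
            abs(row[x] - row[half + x])
            for row in frame for x in range(half)
        )
        if diff / (len(frame) * half) < 10:  # near-identical halves
            votes += 1
    return "3D-SBS" if votes > len(frames) // 2 else "2D"

# A frame whose left and right halves match exactly.
sbs_frame = [[5, 9, 5, 9], [7, 3, 7, 3]]
print(classify_film_source([sbs_frame]))  # → 3D-SBS
```

A trained image recognition model, as the claim recites, would replace this heuristic in practice.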
6. The virtual reality device of claim 1, wherein in the step of querying the database for play data using the key information, the controller is further configured to:
acquiring user specified information in the control instruction;
if the user-specified information is a specified film source type, querying a target playing mode in the database, wherein the target playing mode is a playing mode suitable for the specified film source type;
and if the user-specified information is a specified play mode, querying a target film source type in the database, wherein the target film source type is a film source type suitable for the specified play mode.
7. The virtual reality device of claim 6, wherein if the user-specified information is a specified play mode, the controller is further configured to:
calling the media asset data of the target film source type;
parsing the media asset data according to the specified play mode;
and sending the parsed media asset data to a virtual rendering scene to form a media asset picture.
8. The virtual reality device of claim 7, wherein in the step of invoking the media asset data of the target film source type, the controller is further configured to:
detecting a storage form of the media asset data;
if the storage form is local storage, calling the media asset data of the target film source type from the local storage;
if the storage form is network storage, extracting the media resource address of the target film source type;
and accessing the media resource address to acquire the media asset data of the target film source type.
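Claim 8's branch between local and network storage can be sketched as follows; the address conventions (URL scheme vs. file path) are assumptions for illustration:

```python
import urllib.request

def fetch_asset_data(address):
    """Return the media asset bytes for the target film source type,
    from network storage or local storage depending on the address."""
    if address.startswith(("http://", "https://")):
        # Network storage: access the media resource address.
        with urllib.request.urlopen(address) as resp:
            return resp.read()
    # Local storage: read the media asset data from the file system.
    with open(address, "rb") as f:
        return f.read()
```

Detecting the storage form from the address keeps the calling code identical for both branches.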
9. The virtual reality device of claim 6, wherein if the user-specified information is a specified play mode, the controller is further configured to:
extracting a media asset list displayed in a current playing interface;
traversing the media asset data addresses of the media asset items in the media asset list;
and establishing a mapping relation between each media asset item and the media asset data address of the target film source type so as to acquire the media asset data of the target film source type when a user selects any media asset item in the media asset list.
10. A media asset playing method, applied to a virtual reality device, wherein the virtual reality device comprises a display and a controller, and the media asset playing method comprises the following steps:
receiving a control instruction which is input by a user and used for playing media asset data;
extracting key information from the control instruction in response to the control instruction;
querying playing data in a database using the key information, wherein the playing data comprises a playing mode and/or a film source type, and the database comprises mapping relations between a plurality of film source types and a plurality of playing modes;
and controlling the display to display the media asset data in the playing interface according to the playing data.
CN202110280647.4A 2021-03-16 2021-03-16 Virtual reality equipment and media asset playing method Pending CN114327033A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110280647.4A CN114327033A (en) 2021-03-16 2021-03-16 Virtual reality equipment and media asset playing method
PCT/CN2022/078018 WO2022193931A1 (en) 2021-03-16 2022-02-25 Virtual reality device and media resource playback method


Publications (1)

Publication Number Publication Date
CN114327033A true CN114327033A (en) 2022-04-12

Family

ID=81044226



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024012137A1 (en) * 2022-07-12 2024-01-18 中兴通讯股份有限公司 Navigation method based on virtual reality, and controller and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040078382A1 (en) * 2002-10-16 2004-04-22 Microsoft Corporation Adaptive menu system for media players
US20080091721A1 (en) * 2006-10-13 2008-04-17 Motorola, Inc. Method and system for generating a play tree for selecting and playing media content
CN102377964A (en) * 2010-08-16 2012-03-14 康佳集团股份有限公司 Method and apparatus for picture-in-picture realization in television and corresponded television set
CN103618913A (en) * 2013-12-13 2014-03-05 乐视致新电子科技(天津)有限公司 Method and device for playing 3D film source in intelligent television
CN104683787A (en) * 2015-03-12 2015-06-03 青岛歌尔声学科技有限公司 Method and device for identifying video types, display equipment and video projecting method thereof
CN105323595A (en) * 2015-10-28 2016-02-10 北京小鸟看看科技有限公司 Network based video type identifying method, client and server
CN106534830A (en) * 2016-10-10 2017-03-22 成都斯斐德科技有限公司 Virtual reality-based cinema playing system
CN106559680A (en) * 2016-11-25 2017-04-05 北京小米移动软件有限公司 Video type recognition methodss, device and electronic equipment
CN107027071A (en) * 2017-03-14 2017-08-08 深圳市创达天盛智能科技有限公司 A kind of method and apparatus of video playback
CN108366304A (en) * 2018-01-29 2018-08-03 北京微视酷科技有限责任公司 A kind of audio-visual broadcast control system and method based on virtual reality technology
CN111209440A (en) * 2020-01-13 2020-05-29 腾讯科技(深圳)有限公司 Video playing method, device and storage medium
CN112333509A (en) * 2020-10-30 2021-02-05 Vidaa美国公司 Media asset recommendation method, recommended media asset playing method and display equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110126160A1 (en) * 2009-11-23 2011-05-26 Samsung Electronics Co., Ltd. Method of providing 3d image and 3d display apparatus using the same
JP6492673B2 (en) * 2015-01-15 2019-04-03 セイコーエプソン株式会社 Head-mounted display device, method for controlling head-mounted display device, computer program
CN106792094A (en) * 2016-12-23 2017-05-31 歌尔科技有限公司 The method and VR equipment of VR device plays videos
CN107103638B (en) * 2017-05-27 2020-10-16 杭州万维镜像科技有限公司 Rapid rendering method of virtual scene and model
CN107580244A (en) * 2017-07-31 2018-01-12 上海与德科技有限公司 The method for cutting film source, the equipment and terminal that cut film source
CN110572656B (en) * 2019-09-19 2021-11-19 江苏视博云信息技术有限公司 Encoding method, image processing method, device, system, storage medium and equipment



Also Published As

Publication number Publication date
WO2022193931A1 (en) 2022-09-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination