WO2022193931A1 - A virtual reality device and media asset playing method - Google Patents

A virtual reality device and media asset playing method

Info

Publication number
WO2022193931A1
Authority
WO
WIPO (PCT)
Prior art keywords
media asset
virtual reality
value
user
reality device
Prior art date
Application number
PCT/CN2022/078018
Other languages
English (en)
French (fr)
Inventor
郑美燕
孟亚州
王大勇
姜璐珩
Original Assignee
海信视像科技股份有限公司 (Hisense Visual Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 海信视像科技股份有限公司 (Hisense Visual Technology Co., Ltd.)
Publication of WO2022193931A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Definitions

  • the present application relates to the technical field of virtual reality, and in particular, to a virtual reality device and a method for playing media assets.
  • Virtual Reality (VR) technology is a display technology that simulates a virtual environment through a computer, thereby giving people a sense of immersion in the environment.
  • a virtual reality device is a device that uses virtual reality technology to present virtual images to users.
  • a virtual reality device includes two display screens for presenting virtual picture content, corresponding to the left and right eyes of the user respectively. When the contents displayed on the two display screens come from images of the same object from different viewing angles, a three-dimensional viewing experience can be brought to the user.
  • the virtual reality device can play multimedia resources of various types of film sources, such as 2D film sources, 3D film sources, and panoramic film sources. Different types of film sources require different playback modes, namely 2D mode, 3D mode, and panorama mode.
  • the user can select a suitable mode, so that the virtual reality device can display the corresponding media asset screen content in the playback interface.
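As a simplified illustration of the relationship between film source types and playback modes described above, the following sketch maps each source type to its matching mode. The enum and method names are hypothetical, not identifiers from the disclosure.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch: mapping film source types to playback modes.
// SourceType, PlayMode and the mapping entries are illustrative.
public class PlayModeMapping {
    enum SourceType { SOURCE_2D, SOURCE_3D_LR, SOURCE_3D_TB, SOURCE_PANORAMA }
    enum PlayMode { MODE_2D, MODE_3D_LR, MODE_3D_TB, MODE_PANORAMA }

    static final Map<SourceType, PlayMode> MAPPING = new EnumMap<>(SourceType.class);
    static {
        MAPPING.put(SourceType.SOURCE_2D, PlayMode.MODE_2D);
        MAPPING.put(SourceType.SOURCE_3D_LR, PlayMode.MODE_3D_LR);
        MAPPING.put(SourceType.SOURCE_3D_TB, PlayMode.MODE_3D_TB);
        MAPPING.put(SourceType.SOURCE_PANORAMA, PlayMode.MODE_PANORAMA);
    }

    // Look up the playback mode matching a given film source type.
    public static PlayMode modeFor(SourceType type) {
        return MAPPING.get(type);
    }
}
```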
  • the present application provides a virtual reality device and a media resource playback method to solve the problem that the traditional playback method cannot automatically select a playback mode and a film source type.
  • the virtual reality device includes: a display and a controller, wherein the display is configured to display a playback interface and other user interfaces; the controller is configured to execute the following program steps:
  • the playback data includes playback modes and/or source types, and the database includes mapping relationships between multiple source types and multiple playback modes;
  • the display is controlled to display the media asset data in the playback interface.
  • the virtual reality device includes: a display; and a controller, configured to execute the following program steps:
  • perform zoom processing on the picture to be displayed according to a custom area to obtain a custom area image;
  • the display is controlled to display the custom area image.
  • a method for playing media assets provided by the present application is applied to the above-mentioned virtual reality device, and the method for playing media assets includes the following steps:
  • the playback data includes playback modes and/or source types, and the database includes mapping relationships between multiple source types and multiple playback modes;
  • the display is controlled to display the media asset data in the playback interface.
  • FIG. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of the application
  • FIG. 2 is a schematic diagram of a VR scene global interface in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a recommended content area of a global interface in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a playback interface in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the division of the playback interface area in the embodiment of the present application.
  • FIG. 6 is a schematic diagram of a mode switching operation interface in an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a method for playing media assets in an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of maintaining a database in an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of creating a database in an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of creating a database using the MyBatis framework in the embodiment of the application.
  • FIG. 11 is a schematic flowchart of identifying a media resource source type in an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of calling media asset data in an embodiment of the application.
  • FIG. 14 is a schematic diagram of a rendering scene in an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of a method for displaying a VR screen in an embodiment of the present application.
  • FIG. 16 is a schematic flowchart of generating a custom area in an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an input interface in an embodiment of the application.
  • FIG. 19 is a schematic flowchart of judging whether the input error value exceeds the error value range in the embodiment of the application.
  • FIG. 20 is a schematic flowchart of performing zoom processing on a to-be-displayed image in an embodiment of the present application.
  • module refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic or combination of hardware or/and software code capable of performing the function associated with that element.
  • the virtual reality device 500 generally refers to a display device that can be worn on the user's face to provide the user with an immersive experience, including but not limited to VR glasses, Augmented Reality (AR) devices, VR game devices, mobile computing devices, and other wearable computers.
  • Some embodiments of the present application describe the technical solution by taking VR glasses as an example. It should be understood that the provided technical solution can be applied to other types of virtual reality devices at the same time.
  • the virtual reality device 500 can run independently, or be connected to other smart display devices as an external device, where the display device can be a smart TV, a computer, a tablet computer, a server, or the like.
  • the virtual reality device 500 can display a media image to provide a close-up image for the user's eyes, so as to bring an immersive experience.
  • the virtual reality device 500 may include a number of components for display and face wear.
  • the virtual reality device 500 may include, but is not limited to, at least one of a casing, a position fixing member, an optical system, a display assembly, a posture detection circuit, an interface circuit, and the like.
  • the optical system, the display assembly, the posture detection circuit, and the interface circuit can be arranged in the casing to present a specific display picture; the two sides of the casing are connected with position fixing members so that the device can be worn on the user's head.
  • in use, the posture detection circuit has built-in posture detection elements such as a gravitational acceleration sensor and a gyroscope. When the user's head moves or rotates, the user's posture can be detected, and the detected posture data can be transmitted to a processing element such as the controller, so that the processing element can adjust the specific picture content in the display assembly according to the detected posture data.
  • a network-based display system can be constructed between the virtual reality device 500 shown in FIG. 1 and the server 400 , and data interaction can be performed in real time between the virtual reality device 500 and the server 400 .
  • the user can also use the display device 200, the mobile terminal 300, and the remote control 100, and can also directly interact with the virtual reality device 500.
  • the mobile terminal 300 and the remote control 100 can be used as controllers in the virtual reality scene to realize functions such as somatosensory interaction.
  • the display component of the virtual reality device 500 includes a display screen and a driving circuit related to the display screen.
  • the display component may include two display screens, corresponding to the user's left eye and right eye respectively.
  • the content displayed on the left and right screens will be slightly different, respectively showing the pictures captured by the left and right cameras when the 3D film source was shot. Because the user's left and right eyes observe different picture content, a display picture with a strong three-dimensional effect can be observed when wearing the device.
  • the optical system in the virtual reality device 500 is an optical module composed of multiple lenses.
  • the optical system is set between the user's eyes and the display screen, which can increase the optical path through the refraction of the optical signal by the lens and the polarization effect of the polarizer on the lens, so that the content displayed by the display component can be clearly displayed in the user's field of vision.
  • the optical system also supports focusing, that is, adjusting the position of one or more of the multiple lenses through the focusing component, changing the mutual distance between the multiple lenses, and thus changing the optical path to adjust the picture sharpness.
  • the interface circuit of the virtual reality device 500 can be used to transmit interactive data.
  • the virtual reality device 500 can also be connected to other display devices or peripherals through the interface circuit to exchange data with the connected devices, so as to achieve more complex functions.
  • the virtual reality device 500 may be connected to a display device through an interface circuit, so as to output the displayed picture to the display device in real time for display.
  • the virtual reality device 500 may also be connected to a handle through an interface circuit, and the handle may be operated by the user by hand, so as to perform related operations in the VR user interface.
  • the VR user interface can be presented as a variety of different types of UI layouts according to user operations.
  • the user interface may include a global interface. The global UI after the AR/VR terminal is started is shown in FIG. 2, and the global UI can be displayed on the display screen of the AR/VR terminal or on the display of the display device.
  • the global UI may include a recommended content area 1 , a business classification extension area 2 , an application shortcut operation entry area 3 , and a suspended object area 4 .
  • Recommended content area 1 is used to configure TAB columns of different categories; media resources, topics, and the like can be configured in the columns. The media resources can include services with media content such as 2D film and television, education courses, travel, 3D, 360-degree panorama, live broadcast, 4K film and television, program applications, and games. A column can choose different template styles and can support simultaneous recommendation and arrangement of media resources and themes, as shown in FIG. 3.
  • the status bar is used to enable the user to perform common control operations, so as to quickly set the virtual reality device 500 . Since the setting procedure for the virtual reality device 500 includes many items, it is usually not possible to display all the commonly used setting options in the status bar. To this end, in some embodiments, the status bar may also be provided with extended options. After the extension option is selected, an extension window may be presented in the current interface, and a plurality of setting options may be further set in the extension window for implementing other functions of the virtual reality device 500 .
  • a "shortcut center” option may be set in the extension window.
  • the virtual reality device 500 may display the shortcut center window.
  • the shortcut center window can include "Screenshot”, “Screen Recording” and “Screencast” options to wake up the corresponding functions respectively.
  • the business classification extension area 2 supports the configuration of extended classifications of different classifications. If there is a new business type, you can configure an independent TAB to display the corresponding page content.
  • the expansion classification in the business classification expansion area 2 can also be sorted and adjusted and offline business operations can be performed.
  • business classification extension area 2 can include the following content: film and television, education, travel, applications, and mine.
  • the service classification extension area 2 is configured to display a large service classification TAB, and supports configuration of more classifications, and its icons support configuration, as shown in FIG. 3 .
  • the interaction can be performed through peripheral devices. For example, the handle of the AR/VR terminal can operate the user interface of the AR/VR terminal, including: the back button; the home button, whose long press can realize the reset function; the volume up and down buttons; and the touch area, which can realize the functions of clicking, sliding, pressing, and dragging the focus.
  • the user can enter different scene interfaces through the global interface. For example, as shown in FIG. 4 and FIG. 5, the user can enter the play interface through the "play interface" entry in the global interface, or start the play interface by selecting any media asset in the global interface.
  • the virtual reality device 500 can create a 3D scene through the Unity 3D engine, and render specific screen content in the 3D scene.
  • the virtual reality device 500 can display the operation UI content in the play interface.
  • a media asset list UI control may also be displayed in front of the display panel in the Unity 3D scene. The media asset list may display icons of media assets currently stored in the virtual reality device 500, or icons of web media that the virtual reality device 500 can play.
  • the user can select any icon in the media asset list, play the media asset data corresponding to the icon, and the selected media asset can be displayed in real time on the display panel.
  • the media assets that can be displayed in the Unity 3D scene can be in various forms such as pictures and videos. Due to the display characteristics of the VR scene, the media assets displayed in the Unity 3D scene at least include 2D pictures or videos, 3D pictures or videos, and panoramic pictures or videos.
  • the 2D picture or video is a traditional picture or video file. When displayed, the same image can be displayed on the two display screens of the virtual reality device 500.
  • the 2D picture or video is collectively referred to as a 2D film source.
  • a 3D picture or video, namely a 3D film source, is made by at least two cameras shooting the same object at different angles, and can display different images on the two display screens of the virtual reality device 500 to achieve a stereoscopic effect;
  • panoramic pictures or videos, namely panoramic film sources, are panoramic images obtained by panoramic cameras or special shooting methods.
  • the pictures can be displayed by creating a display sphere in the Unity 3D scene to present a panoramic effect.
  • each frame of image of the left-right type 3D film source includes two parts, left and right, which are images captured by the left-eye camera and the right-eye camera respectively.
  • as for the panoramic film source, it can be further divided into 360 panorama, 180 panorama, and fisheye panorama according to the picture field of view of the panoramic film source.
  • the panoramic film source may also include a true panoramic type, a left-right type panoramic type, and an up-down type panoramic type.
  • the playback interface may also be provided with UI controls for playback control.
  • a UI control for playback control can be set in front of the display panel, and the UI control is a floating interactive UI control, that is, the display can be triggered according to a specific trigger action.
  • a "mode switch" option can be included in the UI control. When the user clicks the "mode switch” option, a mode list can be displayed, including mode options such as "2D mode" and "3D mode". After the user selects any mode option in the mode list, the user can control the virtual reality device to play the media asset data in the media asset list according to the selected mode.
  • the play interface can play the media asset item, that is, display the screen corresponding to the media asset item on the display panel.
  • the user can also switch the playback mode by calling the UI controls for playback control, select any mode option in the UI control interface, and play the selected media asset data according to the switched playback mode.
  • one media asset item may correspond to multiple data forms.
  • for example, some media asset items include both 2D media asset data and 3D media asset data. That is, one media asset item corresponds to two media asset files, in which each frame of image in one media asset file contains only one specific picture, and each frame of image in the other media asset file contains two specific pictures of left and right (or upper and lower) parts.
  • a media asset file may be selected for playback according to different playback modes, so as to obtain different effects.
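A minimal sketch of selecting a media asset file according to the playback mode, assuming each item keeps one file per source type and falls back to its 2D file when no matching file exists. All class, method, and key names are illustrative, not from the disclosure.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a media asset item holding one file per source type.
public class MediaAssetItem {
    private final Map<String, String> filesBySourceType = new HashMap<>();

    public void addFile(String sourceType, String filePath) {
        filesBySourceType.put(sourceType, filePath);
    }

    // The playback mode name is assumed to mirror the source type key here;
    // when no matching file exists, fall back to the item's 2D file.
    public String fileForMode(String playMode) {
        return filesBySourceType.getOrDefault(playMode, filesBySourceType.get("2d"));
    }
}
```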
  • a virtual reality device 500 is provided, and the virtual reality device 500 may include a display and a controller.
  • the display is used to display the above-mentioned playing interface and other user interfaces.
  • the controller can automatically select the playback mode or the film source type by executing the media asset playing method, so as to mitigate display errors. Therefore, as shown in FIG. 7, the controller can be configured to perform the following program steps:
  • S1 Receive a control instruction input by a user for playing media asset data.
  • the virtual reality device 500 may receive various control instructions input by the user in practical applications, and different control instructions may implement different functions and correspond to different interaction actions.
  • the control instruction for playing the media asset data may be an interactive action of the user selecting to open any picture or video file on the file management interface to complete the input.
  • the input is completed by the user selecting an interactive action of opening any picture or video connection in the media asset recommendation interface.
  • the user's control instruction for playing the media asset data can also be input through other interactive actions. For example, when the display shows the play interface, the user can complete the input by clicking on the media asset list in the play interface.
  • the input of control instructions may also be completed through the supported input modes.
  • the input can also be completed by inputting voices such as "open XX (media asset data name)", "I want to see XX”.
  • the virtual reality device 500 may extract key information from the control instruction, so as to automatically select the playback mode or film source type according to the control instruction.
  • the key information may include information related to user-specified content and information related to media asset data to be played.
  • the key information may also include the play mode specified by the user in the play interface.
  • the key information may also include the user clicking on the name of the media asset item during the interaction process and the detailed information of the media asset data under the item name, including information such as file description, format extension, and the like.
  • the key information can be used to automatically select the play mode or the film source type, so the content contained in the key information should be able to indicate the subsequent automatic play mode. For example, when the user selects media asset data of a 3D film source type to play, the key information should include the selected media asset item information and the fact that the film source type to be played for the current media asset item is a 3D film source. In the process of the user switching the play mode, the key information should include the currently playing media asset item information and the switched play mode.
  • the virtual reality device 500 may also use the key information to perform a matching query to obtain playback data that is compatible with the content specified in the key information.
  • the playback data includes a playback mode or a film source type. According to the different key information extracted from the control instruction, the content of the playback data obtained by the query is also different.
  • for example, when the user selects media asset data of a 3D film source type to play, and the film source type is specified in the corresponding key information, the playback data includes that the playback mode is the 3D mode; when the user controls the virtual reality device 500 to switch the playback mode from the 2D mode to the 3D mode, and the playback mode is specified in the corresponding key information, the playback data includes that the film source type is a 3D film source.
  • a database may also be pre-built in the virtual reality device 500, and the database may include mapping relationships between various types of film sources and various playback modes.
  • the database may include all movie source types and all playing modes that the current virtual reality device 500 can support to play. The database can be called when the virtual reality device 500 enters the play interface, so as to perform the query process.
  • the database may also include related content for identifying the current play mode or source type.
  • for example, the file format and file description information used to determine the film source type, and the playback program code and the playback mode tag used to determine the playback mode.
  • These contents may enable the virtual reality device 500 to accurately identify the current user intent in order to perform automatic selection.
  • the content of key information extracted by the virtual reality device 500 can be reduced, that is, the virtual reality device 500 can directly extract key information that is easier to obtain, and determine the source type or playback mode specified by the user according to the content.
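One conceivable way to derive the film source type from easily obtained key information, such as the file name and description, is a keyword heuristic. The markers checked below ("_lr", "360", and so on) are assumptions for illustration; the disclosure only states that format and description information may be used to determine the film source type.

```java
// Hypothetical heuristic sketch: inferring a film source type from file
// name and description keywords. Marker strings are illustrative.
public class SourceTypeGuesser {
    public static String guess(String fileName, String description) {
        String s = (fileName + " " + description).toLowerCase();
        if (s.contains("360") || s.contains("panorama")) return "panorama";
        if (s.contains("_lr") || s.contains("left-right")) return "3d_lr";
        if (s.contains("_tb") || s.contains("top-bottom")) return "3d_tb";
        return "2d"; // default when no marker is found
    }
}
```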
  • S4 Control the display to display the media asset data in the playback interface according to the playback data.
  • the virtual reality device 500 may execute a playback program according to the corresponding playback data, so as to display a screen corresponding to the media asset data in the playback interface.
  • the virtual reality device 500 may control the corresponding media asset playback process according to the content specified in the control instruction and the content of the playback data obtained by the query.
  • the key information extracted in the control instruction is that the source type of the current media resource data is a left-right 3D source.
  • the playback data obtained by the query includes that the playback mode suitable for the current 3D film source is the left-right 3D mode. Therefore, based on the control instruction and the playback data, it is determined that the playback process performs playback as a left-right 3D film source. That is, for each frame of image in the media asset data, with the vertical axis as the boundary, the image is divided into two parts, the left-eye image and the right-eye image, where the left-eye image is sent to the display panel visible to the left-eye camera in the rendering scene, and the right-eye image is sent to the display panel visible to the right-eye camera in the rendering scene for playback.
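The left-right frame split described above can be sketched as follows, dividing each frame at the vertical midline into a left-eye image and a right-eye image. Class and method names are illustrative, not from the disclosure.

```java
import java.awt.image.BufferedImage;

// Sketch: split a left-right 3D frame at the vertical midline into
// a left-eye image and a right-eye image.
public class FrameSplitter {
    public static BufferedImage[] splitLeftRight(BufferedImage frame) {
        int halfWidth = frame.getWidth() / 2;
        int height = frame.getHeight();
        BufferedImage left = frame.getSubimage(0, 0, halfWidth, height);
        BufferedImage right = frame.getSubimage(halfWidth, 0, halfWidth, height);
        return new BufferedImage[] { left, right };
    }
}
```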
  • the virtual reality device 500 can extract key information from the control instruction after acquiring the control instruction input by the user, and use the key information to query the database for the appropriate playback parameters, so as to play the media asset data according to the playback parameters.
  • the virtual reality device 500 can automatically complete the selection of the playback mode or the type of the film source, so that the media asset data to be played can be played in the most appropriate manner, and playback effects in multiple scenarios can be obtained.
  • the virtual reality device 500 can create a database and update the database in real time according to the currently displayed playback interface content. As shown in FIG. 8, before the step of receiving the control instruction for playing the media asset data input by the user, the method also includes:
  • the virtual reality device 500 may create a database in the initial control system for recording the mapping relationship between the play mode and the type of the film source.
  • the database may be created by the virtual reality device 500 through local data, or may be created by a service provider of the virtual reality device 500 through a cloud service.
  • the virtual reality device 500 can traverse the media asset items contained in the current play interface and extract all the film source types corresponding to each media asset item, so as to ensure that the information corresponding to the media asset items contained in the current play interface can be queried in the database.
  • the current playback interface includes a media asset list, and the media asset list includes multiple media asset items, wherein each media asset item can be associated with multiple media asset files. For example, media asset A corresponds to two media asset files, namely a 2D media asset file and a left-right 3D media asset file.
  • the virtual reality device 500 can set the corresponding playback mode for each extracted film source type, that is, the playback mode corresponding to a 2D film source is the 2D mode, and the playback mode of a left-right 3D film source is the left-right 3D mode.
  • a mapping relationship table is established according to the type of the film source and the playing mode, wherein the mapping relationship table includes the addresses of the media resource files and the playing mode of a plurality of types of the film source.
  • all source types and their corresponding playback modes in the current interface can be determined and added to the database in the form of a mapping relationship table.
  • This database maintenance process can make the database contain the mapping relationship corresponding to the media asset items in the current interface, so that the source type or playback mode can be quickly queried in the subsequent query process.
  • the one-to-many mapping relationship means that, from the perspective of picture/video resources, one picture/video has multiple playback modes, that is, one-to-many; from the perspective of playback modes, multiple playback modes correspond to one picture/video resource, that is, many-to-one.
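The one-to-many/many-to-one relationship above can be sketched as a simple in-memory table, where one resource key maps to a list of playback modes. Class, method, and key names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: one media resource maps to several playback modes (one-to-many);
// each registered mode points back to the same resource (many-to-one).
public class MappingTable {
    private final Map<String, List<String>> modesByResource = new HashMap<>();

    public void register(String resource, String mode) {
        modesByResource.computeIfAbsent(resource, k -> new ArrayList<>()).add(mode);
    }

    public List<String> modesFor(String resource) {
        return modesByResource.getOrDefault(resource, List.of());
    }
}
```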
  • a database can be created through the mapping framework. Frameworks that can create databases include open source object-relational mapping framework (Hibernate), Java Database Connectivity (JDBC), and persistence layer framework (MyBatis).
  • the MyBatis framework has the interface binding function, that is, including annotation binding Structured Query Language (SQL) and Extensible Markup Language binding SQL.
  • the MyBatis framework supports the Object Graph Navigation Language (OGNL) expression dynamic structured query language. Therefore, the MyBatis framework can flexibly configure the SQL statement to be run through XML or annotation, map the java object and the SQL statement to generate the final executed SQL, and finally remap the result of the SQL execution to generate the java object.
  • the learning threshold of the MyBatis framework is low; the database maintainer can directly write native SQL, the SQL execution performance can be strictly controlled, and the flexibility is high.
  • the MyBatis framework can be selected to create and maintain the database. That is, as shown in FIG. 9 and FIG. 10, the step of storing the mapping relationship table in the database also includes:
  • the MyBatis framework mapping file, that is, the mapping relationship table.
  • the one-to-many/many-to-one mapping relationship between media asset items and playback modes can be quickly established through the MyBatis framework and the mapping relationship table. A database based on the MyBatis framework can not only reduce developers' workload and learning burden, but also make it convenient to uniformly maintain media asset items in different playback interfaces, so that the virtual reality device 500 can quickly query the appropriate playback parameters through the database.
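As a rough illustration, a MyBatis mapping file for querying the playback mode by film source type might look like the fragment below; the namespace, table, and column names are assumptions, not taken from the disclosure.

```xml
<!-- Hypothetical MyBatis mapper sketch; media_asset, source_type and
     play_mode are assumed names for illustration only. -->
<mapper namespace="example.MediaAssetMapper">
  <select id="findPlayMode" parameterType="string" resultType="string">
    SELECT play_mode FROM media_asset
    WHERE source_type = #{sourceType}
  </select>
</mapper>
```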
  • In order to extract key information from the control instruction, the virtual reality device 500 can obtain the media asset data to be played by parsing the control instruction, and extract keywords from the media asset data to be played, so that during the query of the playback data, the source type or the playback mode can be looked up based on the keywords.
  • When using the key information to query the playback data, the virtual reality device 500 can query the playback data based on the source type, and can also query the source type based on the playback mode; therefore, the keywords include the source type of the media asset data to be played or the playback mode specified by the user.
  • During the process of switching to the playback interface, the virtual reality device 500 may extract the format and file description information of the media asset file to obtain the keywords.
  • Through the user's specified information, the source type of the media asset file can be determined.
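As a rough illustration of keyword extraction from the file format and description information, the snippet below guesses a source type from markers in the file name; the marker strings and type labels are assumptions for illustration only:

```python
# Hypothetical keyword extraction: infer the source type of a media file
# from its name and description text, as the device does when switching
# to the playback interface. Markers and labels are illustrative only.
SOURCE_MARKERS = {
    "3d_lr": "3D_LR", "left-right": "3D_LR",
    "3d_tb": "3D_TB", "top-bottom": "3D_TB",
    "360": "PANO_360", "pano": "PANO_360",
}

def extract_source_keyword(filename: str, description: str = "") -> str:
    text = (filename + " " + description).lower()
    for marker, source_type in SOURCE_MARKERS.items():
        if marker in text:
            return source_type
    return "2D"  # default: treat unmarked files as plain 2D sources

print(extract_source_keyword("trip_3d_lr.mp4"))  # 3D_LR
```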
  • the step of extracting keywords from the media asset data to be played also includes:
  • During the process of extracting key information, the virtual reality device 500 may take the media asset data to be played obtained by parsing, and extract multiple frames of image data from it.
  • The multiple frames of image data may be image frames sampled at equal time intervals, so that there are significant differences between the frames, which makes the subsequent image recognition process more accurate.
  • the multiple frames of image data may be sequentially input into the image recognition model, so as to determine the screen arrangement in the image data through the image recognition model.
  • the image recognition model may be a classification model obtained by training sample data according to a machine learning algorithm. That is, after the initial model is established, a sample image with label information is input into the model, and the classification result output by the model is obtained. Combined with the difference between the classification results and the label information, backpropagation is used to adjust the model parameters. After multiple inputs of a large number of samples, a classification model with a certain classification accuracy can be obtained.
  • Image recognition models can also be built by encapsulating image processing algorithms. That is, a series of image recognition programs can be developed in an application, and image recognition can be performed on the input image data through these programs to determine the picture arrangement in the image. For example, the image recognition program can segment the image data into at least two parts, and then calculate the similarity value of the two parts through an image similarity algorithm. When the similarity value is greater than a set threshold, it is determined that the current media asset data is a 3D image.
  • Through the image recognition above, the virtual reality device 500 can obtain the source type of the media asset to be played, that is, obtain the key information. It can be seen that such a method of acquiring key information can alleviate the influence of the file format, description information and other content on the key information, so that the virtual reality device 500 can identify the source type of most media asset files, improving the accuracy of automatic playback mode selection.
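The split-and-compare check described above can be sketched as follows; frames are represented as nested lists of grayscale values, and the 0.9 threshold is an assumed value (a real implementation would decode video frames and use a proper similarity metric):

```python
# Cut each sampled frame into left and right halves and measure their
# similarity; if the halves are near-identical across all sampled
# frames, treat the asset as a left-right 3D source.
def halves_similarity(frame):
    mid = len(frame[0]) // 2
    left = [row[:mid] for row in frame]
    right = [row[mid:2 * mid] for row in frame]
    diff = sum(abs(a - b) for lr, rr in zip(left, right)
               for a, b in zip(lr, rr))
    total = len(frame) * mid
    return 1.0 - diff / (255.0 * total)   # 1.0 means identical halves

def looks_like_lr_3d(frames, threshold=0.9):
    return all(halves_similarity(f) > threshold for f in frames)

lr_frame = [[10, 20, 10, 20], [30, 40, 30, 40]]   # left half == right half
print(looks_like_lr_3d([lr_frame]))  # True
```

The same idea applies to top-bottom sources by splitting along the horizontal axis instead.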
  • Since the key information extracted from the control instruction differs when the user specifies different content, the corresponding playback data queried using the key information also differs. Therefore, as shown in Figure 12, the step of using the key information to query the playback data in the database also includes:
  • The virtual reality device 500 may extract user-specified information from the control instruction, that is, extract the information explicitly selected by the user when performing an interactive action. For example, when the user selects any left-right 3D media asset item to play, the user-specified information is that the source type is a left-right 3D source. And when the user selects the options "mode switching - 3D mode (left-right)" in sequence on the playback interface, the user-specified information is that the playback mode is the left-right 3D mode.
  • Depending on the user-specified information, the virtual reality device 500 may query different playback data from the database. That is, if the user-specified information specifies a source type, the target playback mode is queried in the database; if the user-specified information specifies a playback mode, the target source type is queried in the database.
  • The target playback mode is a playback mode compatible with the specified source type, and the target source type is a source type compatible with the specified playback mode.
  • For example, when the user-specified information is that the source type is a left-right 3D source, the playback data queried in the database indicates that the appropriate playback mode is the left-right 3D mode; when the user-specified information is that the playback mode is the left-right 3D mode, the address of the left-right 3D source corresponding to the current media asset item is queried in the database and called for playback later.
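The two query directions can be sketched as a single lookup over compatibility pairs; the pair table and labels are hypothetical:

```python
# A specified source type yields a target playback mode, and a specified
# playback mode yields a target source type. The pairs stand in for the
# mapping relationship table held in the database.
COMPAT = [
    ("2D", "2D_MODE"),
    ("3D_LR", "3D_LR_MODE"),
    ("3D_TB", "3D_TB_MODE"),
]

def query_playback_data(user_specified: dict) -> dict:
    if "source_type" in user_specified:
        mode = next(m for s, m in COMPAT
                    if s == user_specified["source_type"])
        return {"playback_mode": mode}
    if "playback_mode" in user_specified:
        src = next(s for s, m in COMPAT
                   if m == user_specified["playback_mode"])
        return {"source_type": src}
    raise ValueError("no user-specified information in instruction")

print(query_playback_data({"source_type": "3D_LR"}))
# {'playback_mode': '3D_LR_MODE'}
```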
  • After the query, the virtual reality device 500 may execute the program for playing the media asset data. Obviously, all relevant playback data should be clarified before playing the media asset data, such as the media asset item to be played, and the source type, playback mode and source address related to the media asset item to be played.
  • When playing, the virtual reality device 500 can first call the media asset data of the target source type and parse the media asset data according to the specified playback mode. That is, the virtual reality device 500 can obtain the media asset data from the source address and perform operations such as decoding, to obtain the video data stream or the specific picture corresponding to the media asset data.
  • In order to enable the display to present specific screen content, after the media asset data is parsed, the parsed media asset data also needs to be sent to the virtual rendering scene, so that the specific media asset screen content is displayed through the display panel in the rendering scene.
  • For different source types, different screen display methods can be used in the rendering scene. For example, for left-right 3D source media, the left part and the right part of each frame of image can be sent separately to the display panel visible to the left-eye camera and the display panel visible to the right-eye camera, so as to output media images to the left and right displays respectively and form a 3D effect.
  • The virtual reality device 500 may call the media asset data in different ways according to different media asset storage forms. That is, as shown in FIG. 13, in some embodiments, the step of calling the media asset data of the target source type further includes:
  • S414: Access the media asset address to obtain the media asset data of the target source type.
  • In specific applications, the virtual reality device 500 may detect the storage form of the media asset data through information such as the file size and file location of the media asset data. Usually, when the user jumps from the file management interface to the playback interface, the media asset data being played is stored locally; and when the user jumps from the media asset recommendation interface to the playback interface, the played media asset data is stored on the network, that is, the media asset data storage form is network storage.
  • If the storage form is local storage, the media asset data of the target source type is called from the local storage; if the storage form is network storage, the media asset address of the target source type can be extracted first, and then the media asset address is accessed to obtain the media asset data corresponding to the source type.
  • In some embodiments, the media asset data corresponding to different source types may be partially stored locally and partially stored on the network. For example, media asset item A correspondingly includes two source formats, i.e., a 2D format and a 3D format, where the 2D media asset data is stored in the local storage and the 3D media asset data is stored in the network storage. When the specified source type in the playback data is the 3D source, even though the 2D source of the media asset item is stored in the local storage, it is still necessary to access the media asset address corresponding to the 3D source to obtain the media asset data.
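A minimal sketch of the call path above, assuming a hypothetical registry keyed by (item, source type): local entries are read directly, while network entries are fetched through their media asset address, even when another source type of the same item exists locally:

```python
# Hypothetical registry: each (asset, source type) pair records where its
# media asset data lives and the address to reach it.
ASSET_SOURCES = {
    ("asset_A", "2D"):    {"storage": "local",   "addr": "/media/a_2d.mp4"},
    ("asset_A", "3D_LR"): {"storage": "network",
                           "addr": "http://cdn.example.com/a_3d.mp4"},
}

def call_media_data(asset_id, target_source_type):
    entry = ASSET_SOURCES[(asset_id, target_source_type)]
    if entry["storage"] == "local":
        return ("read_local", entry["addr"])
    # Network storage: extract the media asset address, then access it.
    return ("http_get", entry["addr"])

# A 3D request goes to the network even though a 2D copy is local:
print(call_media_data("asset_A", "3D_LR"))
# ('http_get', 'http://cdn.example.com/a_3d.mp4')
```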
  • In some embodiments, a mapping relationship may be established between the media asset list in the current playback interface and the media asset data addresses. To this end, the virtual reality device 500 can first extract the media asset list displayed in the current playback interface, then traverse the media asset data address of each media asset item in the media asset list, and establish a mapping relationship between the media asset data address of each media asset item and the target source type, so that when the user selects any media asset item in the media asset list, the media asset data of the target source type can be obtained.
  • The virtual reality device 500 obtains the screen content corresponding to the above interface through the rendering scene.
  • The rendering scene refers to a virtual scene constructed by the rendering engine of the virtual reality device 500 through a rendering program.
  • For example, a virtual reality device 500 based on the Unity 3D rendering engine can construct a Unity 3D scene when presenting the display screen.
  • In the Unity 3D scene, various virtual objects and functional controls can be added to render a specific usage scene.
  • In order to output the rendered picture, the virtual reality device 500 may also set virtual cameras in the Unity 3D scene. For example, the virtual reality device 500 can set a left-eye camera and a right-eye camera in the Unity 3D scene according to the positional relationship of the user's eyes, and the left and right displays output the images rendered by the two cameras respectively.
  • The angles of the two virtual cameras in the Unity 3D scene can be adjusted in real time following the pose sensor of the virtual reality device 500, so that when the user moves while wearing the virtual reality device 500, renderings of the Unity 3D scene from different viewing angles can be output in real time.
  • the picture outputted by the rendered scene can be sent to the display for display.
  • the optical components can be used to increase the optical distance of the display image to the user's eyes, so that the user can clearly see the content of the image displayed on the display.
  • the optical component can be composed of multiple convex lenses, and the light emitted by the display is refracted by the multiple convex lenses, so that the user can clearly see the displayed picture content and obtain an immersive experience.
  • A display area can be set: the picture inside the display area is displayed normally, and the picture outside the display area is hidden, so as to alleviate the influence of distortion on the edge picture.
  • For example, based on the screen content obtained from the rendering scene, the virtual reality device 500 can set an initial display area within the 3840×2160 rectangular area of the display. The display area can be determined according to the statistical results of experience effects and therefore has a certain empirical value. Then, according to the shape of the set display area, the picture outside the display area is hidden; that is, on the display, the picture content in the central area is visible, while the edge area is displayed as a pure black pattern.
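The masking behaviour ("central area visible, edge pure black") can be sketched on a toy frame; the 8×4 frame and area bounds stand in for the real 3840×2160 resolution:

```python
# Pixels inside the display area keep their value; pixels outside are
# forced to black (0), hiding the distorted edge picture.
def mask_outside(frame, area):
    x0, y0, x1, y1 = area  # display area, inclusive bounds
    return [[px if x0 <= x <= x1 and y0 <= y <= y1 else 0
             for x, px in enumerate(row)]
            for y, row in enumerate(frame)]

frame = [[9] * 8 for _ in range(4)]
masked = mask_outside(frame, (2, 1, 5, 2))
print(masked[0])  # [0, 0, 0, 0, 0, 0, 0, 0]  -> top edge hidden
print(masked[1])  # [0, 0, 9, 9, 9, 9, 0, 0]  -> centre visible
```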
  • Generally, the system default display area when the virtual reality device 500 outputs the image is the initial display area, and displaying according to the initial display area can meet the needs of most users. However, for some users whose facial features differ from the average, viewing the displayed images according to the initial display area is not suitable, which causes the images viewed by these users to be unclear and reduces the user experience. Therefore, in order to adapt to the facial features of different users, some embodiments of this application provide a virtual reality device and a VR picture display method.
  • the virtual reality device 500 includes a display and a controller, the display is used to display various user interfaces; the controller is used to process various usage parameters and interaction data of the virtual reality device 500 to control the presentation of different screen contents on the display , and as shown in FIG. 15 , the controller may be further configured to execute the VR screen display method, including:
  • S6 Receive a control instruction input by the user for setting the display area.
  • various control instructions may be input as required, so as to realize the interaction between the user and the virtual reality device 500 .
  • the virtual reality device 500 may run the relevant program for setting the display area according to the VR screen display program preset in the operating system.
  • the control instructions for setting the display area may be input in different ways according to the hardware configuration and operating system of the virtual reality device 500 .
  • the virtual reality device 500 may default that the user has input a control instruction for setting the display area.
  • a program related to setting the display area can be started through a specific portal.
  • For example, an "adjust display area" control can be set in the status bar of the setting interface or other UI interfaces. The user can click the adjust display area control during use to run the program for setting the display area; that is, the control instruction for setting the display area is input through interface UI interaction.
  • the user can also input a control instruction for setting the display area through a specific interactive action of the shortcut key.
  • the virtual reality device 500 may set a shortcut key policy according to the key conditions carried by itself. For example, by clicking the power button of the virtual reality device 500 three times in a row, a program related to setting the display area is triggered. That is, the control command for setting the display area is the shortcut key command of clicking the power button three times in a row.
  • the user may also complete the input of control instructions by means of other interactive devices or interactive systems.
  • an intelligent voice system may be built in the virtual reality device 500, and the user may input voice information, such as "set the display area", "I can't see the screen clearly", etc. through an audio input device such as a microphone.
  • the intelligent voice system recognizes the meaning of the voice information by transforming, analyzing, and processing the user's voice information, and generates control instructions according to the recognition results.
  • A user-defined area can be specified in the control instruction.
  • In response to the control instruction, the virtual reality device 500 may extract the custom area designated by the user from the control instruction. For example, when the user inputs a control instruction for setting the display area, an input interface can be displayed on the display, and the user can click the four vertices of the display area to be set in the input interface, so as to generate a custom area according to the coordinates of the four vertices.
  • In some embodiments, the displayed input interface can include a text box or a numerical scroll bar. The user can input the binocular distance (pupillary distance) through the text box, and each numerical value corresponds to a custom area.
  • Alternatively, the user can directly input the width and height of the custom area, and the custom area is determined by the input width and height.
  • In some embodiments, a plurality of options may also be preset in the virtual reality device 500, each option corresponding to a custom area. When the user inputs a control instruction for setting the display area, the user may select the appropriate custom area from the preset options. For example, an area selection interface may be displayed, and the area selection interface may include multiple options set according to conditions such as age, gender and region, each option corresponding to the custom area under those conditions.
  • S8 Perform scaling processing on the to-be-displayed image according to the custom area to obtain a custom area image.
  • the virtual reality device 500 may perform zoom processing on the to-be-displayed screen to obtain the custom area image.
  • When the custom area specified by the user is larger than the initial display area, the to-be-displayed picture can be enlarged; when the custom area specified by the user is smaller than the initial display area, the to-be-displayed picture can be reduced.
  • Since different users specify different areas, the user-defined area usually does not have a fixed aspect ratio.
  • After the scaling processing, the virtual reality device 500 may send the processed picture image to the display for display. Since the image received by the display has been scaled, the final image can be displayed according to the user-defined area; that is, the displayed picture conforms to the current user's facial features, enabling the user to watch a clear picture and reducing discomfort during viewing.
  • In some embodiments, in order to obtain the custom area, in the step of receiving the user-input control instruction for setting the display area, the controller is further configured to:
  • S620 Calculate a width value according to the abscissa value in the vertex coordinates, and calculate a height value according to the ordinate value in the vertex coordinates;
  • S630 Generate a custom area according to the width value and the height value.
  • the virtual reality device 500 may control the display to sequentially display an input interface for prompting input of the coordinates of each vertex, and record the vertex coordinates input by the user in the input interface.
  • the vertex coordinates include the coordinates of the upper left corner, the lower left corner, the lower right corner and the upper right corner.
  • the virtual reality device 500 will start the application for adjusting the display area. After the application is launched, the grid diagram shown in Figure 17 will be displayed to facilitate the user to select an appropriate comfort zone.
  • the virtual reality device 500 may first prompt the user to click the comfortable point in the upper left corner, and record the coordinate information (X1, Y1) of the point; then prompt the user to click the comfortable point in the upper right corner, and Record the coordinate information of the point (X2, Y2); then prompt the user to click on the comfortable point in the lower left corner and record the coordinate information (X3, Y3) of the point; finally prompt the user to click the comfortable point in the lower right corner to record the point The coordinate information (X4, Y4). Through four prompts, the user can be guided to complete the input, and the coordinate information of the appropriate area can be obtained according to the user input, as shown in FIG. 18 .
  • the width and height of the custom area can be calculated according to the vertex coordinates, that is, the width value is calculated according to the abscissa value in the vertex coordinates, and the height value is calculated according to the ordinate value in the vertex coordinates.
  • width and height values can be used as the width and height values that are finally used to generate the custom area.
  • The specific combination of width and height may be determined according to the input method of the vertex coordinates, or according to the input result of the vertex coordinates.
  • Besides a single width and height, the top width and the bottom width, as well as the left height and the right height, can also be calculated respectively.
  • That is, the difference between the abscissas of the upper left corner coordinate and the upper right corner coordinate may be calculated to obtain the first width value, namely W1, and the difference between the abscissas of the lower left corner coordinate and the lower right corner coordinate may be calculated to obtain the second width value, namely W2.
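The corner-difference calculation can be sketched directly; corner order follows the prompts described above (upper left, upper right, lower left, lower right):

```python
# W1 comes from the top corners, W2 from the bottom corners; likewise
# H1/H2 come from the left and right edges.
def area_dimensions(ul, ur, ll, lr):
    w1 = abs(ur[0] - ul[0])   # first width value: top edge
    w2 = abs(lr[0] - ll[0])   # second width value: bottom edge
    h1 = abs(ll[1] - ul[1])   # first height value: left edge
    h2 = abs(lr[1] - ur[1])   # second height value: right edge
    return w1, w2, h1, h2

print(area_dimensions((100, 200), (900, 210), (110, 800), (905, 790)))
# (800, 795, 600, 580)
```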
  • In some embodiments, the step of generating the custom area further includes detecting an input error value, where the input error value includes a width error value calculated according to the abscissas and a height error value calculated according to the ordinates.
  • the virtual reality device 500 may generate the width error value by calculating the difference between W1 and W2.
  • the virtual reality device 500 can also generate a height error value according to the first height value and the second height value, that is, calculate the error between H1 and H2.
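The input-error check can be sketched as follows; the tolerance value is an assumption for illustration (the patent only compares the error value against an error range, as in Figure 19):

```python
# The width error is the difference between W1 and W2, the height error
# between H1 and H2; inputs whose error exceeds the tolerance would be
# rejected and the user re-prompted.
def input_errors(w1, w2, h1, h2):
    return abs(w1 - w2), abs(h1 - h2)

def within_tolerance(w1, w2, h1, h2, tol=50):
    we, he = input_errors(w1, w2, h1, h2)
    return we <= tol and he <= tol

print(within_tolerance(800, 795, 600, 580))  # True
print(within_tolerance(800, 600, 600, 580))  # False: width error is 200
```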
  • Through the above embodiments, the virtual reality device 500 can customize the display area by displaying an input interface and prompting the user to input, in sequence, the points that suit the user. The width and height of the custom area are determined by detecting the vertex coordinates input by the user, which makes it convenient for the user to input a custom area that conforms to the display area specification.
  • The virtual reality device 500 can determine the size of the custom area according to the calculated width and height information. In order to finally determine the display area, the position of the custom area also needs to be set. Therefore, in some embodiments, the step of generating a custom area according to the width value and the height value further includes: calculating center point coordinates according to the vertex coordinates; and generating the custom area based on the center point coordinates.
  • The coordinates of the center point can be obtained by calculating the abscissa values and the ordinate values of the vertex coordinates, respectively.
  • After determining the center point, the virtual reality device 500 can generate the custom area with the center point coordinates as the reference; that is, the center of the custom area is located at the center point coordinates, so that more of the main content of the original video image can be retained in the custom area.
  • After extracting the user-defined area, the virtual reality device 500 performs scaling processing on the to-be-displayed picture to obtain the custom area image.
  • the step of performing zoom processing on the to-be-displayed picture further includes:
  • S830 According to the scaling ratio, perform scaling processing on the to-be-displayed picture.
  • the virtual reality device 500 may extract the initial display area of the picture to be displayed, that is, the system default display area.
  • the initial display area is compared with the user-defined area, so as to generate a zoom ratio according to the comparison result between the initial display area and the user-defined area.
  • the scaling ratio may be generated according to the comparison result of parameters such as the width, height, and area of the initial display area and the user-defined area, and is used to perform scaling processing on the to-be-displayed screen according to the scaling ratio.
  • In order to keep the aspect ratio of the picture unchanged, the width and the height need to use the same scaling ratio; therefore, the first ratio value Ratio1, obtained by comparing the widths, and the second ratio value Ratio2, obtained by comparing the heights, are compared to determine which of them to use.
  • In order to ensure that all content is displayed in the comfortable area set by the user, the smaller of Ratio1 and Ratio2 is taken as the scaling ratio Ratio. That is, if the first ratio value Ratio1 is greater than or equal to the second ratio value Ratio2, the second ratio value Ratio2 is determined as the scaling ratio; if the first ratio value Ratio1 is smaller than the second ratio value Ratio2, the first ratio value Ratio1 is determined as the scaling ratio.
  • Through the above embodiments, the virtual reality device 500 can support the user in manually setting the display area according to viewing comfort, so as to achieve the best viewing effect. By comparing the initial display area and the custom area, a scaling ratio is generated, and the scaling processing is performed using this ratio, ensuring that the aspect ratio of the VR application display content remains unchanged.
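The min-ratio rule can be sketched numerically (the 1920×900 comfort area is a made-up example):

```python
# Ratio1 compares widths, Ratio2 compares heights; the smaller one is
# used so the whole picture fits the custom area with its aspect ratio
# preserved.
def scale_ratio(init_w, init_h, custom_w, custom_h):
    ratio1 = custom_w / init_w    # first ratio value (width)
    ratio2 = custom_h / init_h    # second ratio value (height)
    return ratio2 if ratio1 >= ratio2 else ratio1

def scaled_size(init_w, init_h, custom_w, custom_h):
    r = scale_ratio(init_w, init_h, custom_w, custom_h)
    return round(init_w * r), round(init_h * r)

# A 3840x2160 frame shrunk into a 1920x900 comfort area: the height
# ratio (900/2160) governs, so the result is narrower than 1920.
print(scaled_size(3840, 2160, 1920, 900))   # (1600, 900)
```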


Abstract

This application provides a virtual reality device and a media asset playback method. After receiving a user's control instruction, the media asset playback method can extract key information from the control instruction and use the key information to query playback data in a database, so as to display the media asset data according to the playback data.

Description

A virtual reality device and media asset playback method
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 202110280647.4, filed with the China Patent Office on March 16, 2021 and entitled "A virtual reality device and media asset playback method", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of virtual reality technology, and in particular to a virtual reality device and a media asset playback method.
BACKGROUND
Virtual reality (VR) technology is a display technology that simulates a virtual environment by computer, thereby giving the user a sense of environmental immersion. A virtual reality device is a device that applies virtual display technology to present virtual pictures to the user. Typically, a virtual reality device includes two display screens for presenting virtual picture content, corresponding to the user's left and right eyes respectively. When the content shown on the two display screens comes from images of the same object at different viewing angles, a stereoscopic viewing experience can be brought to the user.
A virtual reality device can play multimedia resources of multiple source types, for example, 2D sources, 3D sources and panoramic sources. Different source types require different playback modes, namely 2D mode, 3D mode, panoramic mode, and so on. When using the virtual reality device, the user can select the appropriate mode so that the virtual reality device can present the corresponding media asset picture content in the playback interface.
SUMMARY
This application provides a virtual reality device and a media asset playback method, to solve the problem that traditional playback methods cannot automatically select the playback mode and the source type.
In a first aspect, the virtual reality device provided by this application includes a display and a controller, wherein the display is configured to display a playback interface and other user interfaces, and the controller is configured to perform the following program steps:
receiving a control instruction input by a user for playing media asset data;
in response to the control instruction, extracting key information from the control instruction;
using the key information to query playback data in a database, the playback data including a playback mode and/or a source type, and the database including mapping relationships between multiple source types and multiple playback modes;
controlling the display to display the media asset data in the playback interface according to the playback data.
In a second aspect, the virtual reality device provided by this application includes: a display; and a controller configured to perform the following program steps:
receiving a control instruction input by the user for setting a display area;
in response to the control instruction, extracting a custom area specified in the control instruction;
performing scaling processing on a to-be-displayed picture according to the custom area to obtain a custom area image;
controlling the display to display the custom area image.
In a third aspect, this application further provides a media asset playback method applied to the above virtual reality device, the media asset playback method including the following steps:
receiving a control instruction input by a user for playing media asset data;
in response to the control instruction, extracting key information from the control instruction;
using the key information to query playback data in a database, the playback data including a playback mode and/or a source type, and the database including mapping relationships between multiple source types and multiple playback modes;
controlling the display to display the media asset data in the playback interface according to the playback data.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions of this application more clearly, the accompanying drawings used in the embodiments are briefly introduced below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of this application;
FIG. 2 is a schematic diagram of the global interface of a VR scene in an embodiment of this application;
FIG. 3 is a schematic diagram of the recommended content area of the global interface in an embodiment of this application;
FIG. 4 is a schematic diagram of the playback interface in an embodiment of this application;
FIG. 5 is a schematic diagram of the area division of the playback interface in an embodiment of this application;
FIG. 6 is a schematic diagram of the mode switching operation interface in an embodiment of this application;
FIG. 7 is a schematic flowchart of the media asset playback method in an embodiment of this application;
FIG. 8 is a schematic flowchart of maintaining the database in an embodiment of this application;
FIG. 9 is a schematic flowchart of creating the database in an embodiment of this application;
FIG. 10 is a schematic flowchart of creating the database using the MyBatis framework in an embodiment of this application;
FIG. 11 is a schematic flowchart of identifying the media asset source type in an embodiment of this application;
FIG. 12 is a schematic flowchart of querying playback data in an embodiment of this application;
FIG. 13 is a schematic flowchart of calling media asset data in an embodiment of this application;
FIG. 14 is a schematic diagram of the rendering scene in an embodiment of this application;
FIG. 15 is a schematic flowchart of the VR picture display method in an embodiment of this application;
FIG. 16 is a schematic flowchart of generating a custom area in an embodiment of this application;
FIG. 17 is a schematic diagram of the input interface in an embodiment of this application;
FIG. 18 is a schematic diagram of the effect of inputting vertex coordinates in an embodiment of this application;
FIG. 19 is a schematic flowchart of determining whether the input error value exceeds the error value range in an embodiment of this application;
FIG. 20 is a schematic flowchart of performing scaling processing on the to-be-displayed picture in an embodiment of this application.
DETAILED DESCRIPTION
To make the objectives, technical solutions and advantages of the exemplary embodiments of this application clearer, the technical solutions in the exemplary embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only some of the embodiments of this application, not all of them.
All other embodiments obtained by a person of ordinary skill in the art based on the exemplary embodiments shown in this application without creative effort fall within the protection scope of this application. In addition, although the disclosure in this application is presented by way of one or several exemplary instances, it should be understood that individual aspects of the disclosure may each separately constitute a complete technical solution.
It should be understood that the terms "first", "second", "third", etc. in the specification, claims and above drawings of this application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, for example, can be implemented in an order other than those given in the illustrations or descriptions of the embodiments of this application.
In addition, the terms "including" and "having" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a product or device comprising a series of components is not necessarily limited to those components clearly listed, but may include other components not clearly listed or inherent to such products or devices.
The term "module" used in this application refers to any known or later-developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code capable of performing the function associated with that element.
References throughout this specification to "multiple embodiments", "some embodiments", "one embodiment" or "an embodiment", etc., mean that a specific feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. Therefore, phrases such as "in multiple embodiments", "in some embodiments", "in at least one other embodiment" or "in an embodiment" appearing throughout this specification do not necessarily all refer to the same embodiment. Furthermore, in one or more embodiments, the specific features, structures or characteristics may be combined in any suitable manner. Therefore, without limitation, a specific feature, structure or characteristic shown or described in connection with one embodiment may be combined in whole or in part with the features, structures or characteristics of one or more other embodiments. Such modifications and variations are intended to be included within the scope of this application.
In the embodiments of this application, the virtual reality device 500 generally refers to a display device that can be worn on the user's face to provide the user with an immersive experience, including but not limited to VR glasses, augmented reality (AR) devices, VR game devices, mobile computing devices and other wearable computers. Some embodiments of this application take VR glasses as an example to describe the technical solution; it should be understood that the provided technical solution can also be applied to other types of virtual reality devices. The virtual reality device 500 can run independently, or be connected to other smart display devices as an external device, where the display device can be a smart TV, a computer, a tablet, a server, etc.
After being worn on the user's face, the virtual reality device 500 can display media asset pictures and provide close-range images for the user's eyes, to bring an immersive experience. To present the media asset pictures, the virtual reality device 500 may include a number of components for displaying pictures and for facial wearing. Taking VR glasses as an example, the virtual reality device 500 may include at least one of a housing, a position fixing member, an optical system, a display assembly, a posture detection circuit, an interface circuit, and other components. In practical applications, the optical system, display assembly, posture detection circuit and interface circuit can be arranged in the housing to present a specific display picture; the two sides of the housing are connected to position fixing members so that the device can be worn on the user's head.
In use, posture detection elements such as gravitational acceleration sensors and gyroscopes are built into the posture detection circuit. When the user's head moves or rotates, the user's posture can be detected, and the detected posture data is transmitted to processing elements such as the controller, so that the processing elements can adjust the specific picture content in the display assembly according to the detected posture data.
In some embodiments, the virtual reality device 500 shown in FIG. 1 can establish a network-based display system with the server 400, and real-time data interaction can be carried out between the virtual reality device 500 and the server 400.
In some embodiments, the user may also use the display device 200, the mobile terminal 300 and the remote control 100, and may also interact directly with the virtual reality device 500. For example, the mobile terminal 300 and the remote control 100 can be used as handles in the virtual reality scene to implement functions such as somatosensory interaction.
The display assembly of the virtual reality device 500 includes a display screen and drive circuits related to the display screen. To present a specific picture and bring a stereoscopic effect, the display assembly may include two display screens, corresponding to the user's left eye and right eye respectively. When presenting a 3D effect, the picture content shown on the left and right screens differs slightly; the pictures of the left camera and the right camera of the 3D source captured during filming can be displayed respectively. Because the user's left and right eyes observe different picture content, a display picture with a strong stereoscopic sense can be observed while wearing the device.
The optical system in the virtual reality device 500 is an optical module composed of multiple lenses. The optical system is arranged between the user's eyes and the display screen; through the refraction of the light signal by the lenses and the polarization effect of the polarizers on the lenses, the optical distance is increased, so that the content presented by the display assembly can be clearly presented within the user's field of view. At the same time, to adapt to the vision conditions of different users, the optical system also supports focusing, that is, the position of one or more of the lenses is adjusted by a focusing assembly, changing the mutual distances between the lenses, thereby changing the optical distance and adjusting the picture clarity.
The interface circuit of the virtual reality device 500 can be used to transfer interaction data. In addition to transferring posture data and display content data as described above, in practical applications the virtual reality device 500 can also connect to other display devices or peripherals through the interface circuit, so as to implement more complex functions through data interaction with the connected devices. For example, the virtual reality device 500 can connect to a display device through the interface circuit, so as to output the displayed picture to the display device in real time for display. For another example, the virtual reality device 500 can also connect to a handle through the interface circuit; the handle can be held and operated by the user to perform related operations in the VR user interface.
The VR user interface can be presented as a variety of different UI layouts according to user operations. For example, the user interface may include a global interface. The global UI after the AR/VR terminal is started is shown in FIG. 2; the global UI can be displayed on the display screen of the AR/VR terminal, or on the display of the display device. The global UI may include a recommended content area 1, a business classification extension area 2, an application shortcut operation entry area 3, and a floating object area 4.
The recommended content area 1 is used to configure TAB columns of different classifications. Media assets, special topics, etc. can be selected and configured in the columns; the media assets may include services with media asset content such as 2D film and television, educational courses, travel, 3D, 360-degree panorama, live broadcast, 4K film and television, program applications, games and travel, and the columns can select different template styles and support simultaneous recommendation and arrangement of media assets and special topics, as shown in FIG. 3.
The status bar is used to enable the user to perform common control operations and quickly set the virtual reality device 500. Since the setting program of the virtual reality device 500 includes many items, usually not all of the common setting options can be displayed in the status bar. Therefore, in some embodiments, an expansion option can also be set in the status bar. After the expansion option is selected, an expansion window can be presented in the current interface, and multiple setting options can be further provided in the expansion window to implement other functions of the virtual reality device 500.
For example, in some embodiments, after the expansion option is selected, a "shortcut center" option can be set in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 can display the shortcut center window. The shortcut center window may include "screenshot", "screen recording" and "screen casting" options, used to wake up the corresponding functions respectively.
The business classification extension area 2 supports configuring extension classifications of different categories. If there is a new business type, an independent TAB can be configured to display the corresponding page content. The extension classifications in the business classification extension area 2 can also be reordered and taken offline. In some embodiments, the business classification extension area 2 may include content such as: film and television, education, travel, applications, and "mine". In some embodiments, the business classification extension area 2 is configured to display major business category TABs and supports configuring more classifications, and its icons support configuration, as shown in FIG. 3.
In some embodiments, interaction can be performed through peripherals. For example, the handle of the AR/VR terminal can operate the user interface of the AR/VR terminal, including a return button; a home key, a long press of which can implement the reset function; volume up/down buttons; and a touch area, which can implement click, slide, and press-and-drag functions of the focus.
The user can enter different scene interfaces through the global interface. For example, as shown in FIG. 4 and FIG. 5, the user can enter the playback interface through the "playback interface" entry in the global interface, or start the playback interface by selecting any media asset in the global interface. In the playback interface, the virtual reality device 500 can create a 3D scene through the Unity 3D engine and render specific picture content in the 3D scene.
In the playback interface, the user can watch specific media asset content. To obtain a better viewing experience, different virtual scene controls can also be set in the playback interface, to present specific scenes or interact in real time in conjunction with the media asset content. For example, in the playback interface, a panel control can be loaded in the Unity 3D scene to present picture content, in conjunction with other home-style virtual controls, to achieve the effect of a cinema screen.
The virtual reality device 500 can display operation UI content in the playback interface. For example, a media asset list UI control can also be displayed in front of the display panel in the Unity 3D scene. The media asset list can display icons of the media assets currently stored locally on the virtual reality device 500, or icons of network media assets that can be played on the virtual reality device 500. The user can select any icon in the media asset list to play the media asset data corresponding to that icon, and the selected media asset can be displayed in real time on the display panel.
The media assets that can be displayed in the Unity 3D scene can take many forms, such as pictures and videos. Moreover, due to the display characteristics of VR scenes, the media assets displayed in the Unity 3D scene include at least 2D pictures or videos, 3D pictures or videos, and panoramic pictures or videos.
Among them, a 2D picture or video is a traditional picture or video file; when displayed, the same image can be shown on the two display screens of the virtual reality device 500. In this application, 2D pictures and videos are collectively referred to as 2D sources. A 3D picture or video, i.e., a 3D source, is produced by at least two cameras shooting the same object from different angles; different images can be shown on the two displays of the virtual reality device 500 to achieve a stereoscopic effect. A panoramic picture or video, i.e., a panoramic source, is a panoramic image obtained by a panoramic camera or special shooting techniques; the picture can be displayed by creating a display sphere in the Unity 3D scene, to present a panoramic effect.
3D sources can be further divided into left-right 3D sources and top-bottom 3D sources, etc., according to the picture arrangement of each frame of image in the source. In a left-right 3D source, each frame of image includes left and right parts, which are the image pictures captured by the left-eye camera and the right-eye camera respectively. Panoramic sources can be further divided by the field of view of the picture into source forms such as 360 panorama, 180 panorama and fisheye panorama; the picture composition of each frame of image differs between the different panoramic source forms. To present a better stereoscopic effect, panoramic sources can also include true panorama, left-right panorama, top-bottom panorama and other forms.
Since the media asset data that can be displayed in the playback interface includes multiple source types, and different source types require different image output methods, UI controls for playback control can also be provided in the playback interface. For example, the UI control for playback control can be arranged in front of the display panel. This UI control is a floating interactive UI control, that is, its display can be triggered by a specific trigger action. As shown in FIG. 6, the UI control may include a "mode switching" option. When the user clicks the "mode switching" option, a mode list can be displayed, including mode options such as "2D mode" and "3D mode". After the user selects any mode option in the mode list, the virtual reality device can be controlled to play the media asset data in the media asset list in the selected mode.
Similarly, after the user selects a media asset item in the list, the playback interface can play that media asset item, that is, display the picture corresponding to the media asset item on the display panel. During playback of the media asset item, the user can also switch the playback mode by calling the UI control for playback control and selecting any mode option in the UI control interface, and play the selected media asset data in the switched playback mode.
To adapt to different playback modes, in some embodiments, one media asset item correspondingly contains data in multiple forms. For example, some media asset items contain both 2D-form media asset data and 3D-form media asset data. That is, one media asset item corresponds to two media asset files: in one media asset file, each frame of image contains only one specific picture, while in the other media asset file, each frame of image contains a left-right (or top-bottom) two-part specific picture. For such a media asset item, during playback, one of the media asset files can be selected for playback according to the different playback modes to obtain different effects.
It can be seen that when playing media assets with the virtual reality device 500, multiple source types correspond to multiple playback modes, and display errors can easily occur during playback. For this reason, some embodiments of this application provide a virtual reality device 500, which may include a display and a controller. The display is used to display the above playback interface and other user interfaces. The controller can automatically select the playback mode or the source type by executing the media asset playback method, alleviating the display errors that occur. Therefore, as shown in FIG. 7, the controller can be configured to execute the following program steps:
S1:接收用户输入的用于播放媒资数据的控制指令。
虚拟现实设备500可以在实际应用中接收用户输入的各种控制指令,不同的控制指令可以实现不同的功能,对应不同的交互动作。例如,对于用于播放媒资数据的控制指令可以是用户在文件管理界面选择打开任一图片或视频文件的交互动作完成输入。或者是由用户在媒资推荐界面中选择打开任一图片或视频连接的交互动作完成输入。
用户播放媒资数据的控制指令还可以通过其他交互动作完成输入。例如,用户可以在显示器显示播放界面时,通过点击播放界面中的媒资列表完成输入。而对于支持其他交互方式的虚拟现实设备500,还可以通过所支持的输入方式完成控制指令的输入。例如,对于支持智能语音系统的虚拟现实设备500,还可以通过输入诸如“打开××(媒资数据名称)”、“我想看××”的语音完成输入。
S2:响应于所述控制指令,从所述控制指令中提取关键信息。
在接收到用户输入的播放控制指令后,虚拟现实设备500可以从控制指令中提取关键信息,以根据控制指令自动选择播放模式或者片源类型。其中,关键信息可以包括与用户指定内容相关的信息,以及与待播放媒资数据相关的信息。例如,关键信息还可以包括用户在播放界面中指定的播放模式。关键信息还可以包括用户在交互过程中点击媒资项目名称以及该项目名称下媒资数据的详细信息,包括文件描述、格式扩展名等信息。
关键信息可用于后续自动选择播放模式或者片源类型,因此关键信息所包含的内容应能够指示后续自动播放模式。例如,用户在选定一个3D片源类型的媒资数据进行播放过程中,关键信息应包括选中的媒资项目信息以及当前媒资项目对应要播放的片源类型为3D片源。而用户在切换播放模式的过程中,关键信息应包括当前播放的媒资项目信息以及切换后的播放模式。
S3:使用所述关键信息在数据库中查询播放数据。
在提取到关键信息以后,虚拟现实设备500还可以使用关键信息进行匹配查询,以获得与关键信息中指定内容相适应的播放数据。其中,所述播放数据包括播放模式或片源类型。根据在控制指令中提取的关键信息不同,查询获得的播放数据内容也不同。例如,当用户选择一个3D片源类型的媒资数据进行播放,相应关键信息中指定了片源类型,则播放数据包括播放模式为3D模式;当用户控制虚拟现实设备500的播放模式从2D模式切换至3D模式时,相应关键信息中指定了播放模式,则播放数据包括片源类型为3D片源。
为了便于查询播放数据,虚拟现实设备500中还可以预先构建一个数据库,所述数据库中可以包括多种片源类型和多种播放模式之间的映射关系。数据库中可以包括当前虚拟现实设备500所能够支持播放的全部片源类型以及全部播放模式。数据库可以在虚拟现实设备500进入播放界面时调用,以便执行查询过程。
在一些实施例中,数据库还可以包括用于识别当前播放模式或者片源类型的相关内容。例如,用于确定片源类型的文件格式、文件描述信息等,以及用于确定播放模式的播放程序代码、播放模式标签等。这些内容可以使虚拟现实设备500能够准确识别当前的用户意图,以便执行自动选择。并且,通过这些内容,可以使虚拟现实设备500提取的关键信息内容减少,即虚拟现实设备500可以直接提取较容易获得的关键信息,并依据这些内容确定用户指定的片源类型或者播放模式。
S4:按照所述播放数据,控制所述显示器在所述播放界面中显示所述媒资数据。
在查询到与关键信息相适应的播放数据后,虚拟现实设备500可以按照对应的播放数据执行播放程序,以便在播放界面中显示媒资数据对应的画面。对于媒资数据的播放过程,虚拟现实设备500可以按照控制指令中指定的内容,以及查询获得的播放数据内容控制相应的媒资播放过程。
例如,在控制指令中提取的关键信息为当前媒资数据的片源类型为左右型3D片源,通过执行上述查询过程,则查询获得的播放数据中包括与当前3D片源相适应的播放模式为左右型3D模式,因此,基于控制指令和播放数据,确定播放过程为左右型3D片源执行播放。即针对媒资数据中每一帧图像画面,以竖直中轴线为界,将图像画面拆分为左眼图像和右眼图像两个部分,其中左眼图像发送到渲染场景中左眼相机可见的显示面板进行播放,右眼图像则发送到渲染场景中右眼相机可见的显示面板进行播放。
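上述左右型3D片源的拆分过程可以用如下示意代码表示(仅为原理示意,假设每帧画面以NumPy数组形式给出,函数名为说明而设,并非本申请限定的实现):

```python
import numpy as np

def split_lr_frame(frame: np.ndarray):
    """将左右型3D片源的一帧画面沿竖直中轴线拆分为左眼图像和右眼图像。

    frame: 形状为 (高, 宽, 通道) 的图像数组,宽度应为偶数。
    返回 (左眼图像, 右眼图像),分别送往渲染场景中
    左眼相机可见的显示面板和右眼相机可见的显示面板。
    """
    mid = frame.shape[1] // 2
    left_eye = frame[:, :mid]    # 竖直中轴线左侧部分
    right_eye = frame[:, mid:]   # 竖直中轴线右侧部分
    return left_eye, right_eye
```

对媒资数据中的每一帧画面重复执行该拆分,即可分别向左右两个显示器输出对应画面。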
可见,在上述实施例中,虚拟现实设备500可以在获取用户输入的控制指令以后,从控制指令中提取关键信息,并使用关键信息在数据库中查询相适应的播放参数,从而按照播放参数播放媒资数据。如此,虚拟现实设备500可以自动完成播放模式或片源类型的选择,使播放的媒资数据可以按照最合适的方式进行播放,并获得多个场景下的播放效果。
由于上述实施例中虚拟现实设备500需要依赖数据库完成相关内容的查询和匹配,因此,在一些实施例中,虚拟现实设备500可以创建数据库并按照当前显示的播放界面内容实时更新数据库,即如图8所示,接收用户输入的用于播放媒资数据的控制指令的步骤前还包括:
S501:创建数据库;
S502:遍历当前播放界面中的媒资项目;
S503:提取每个媒资项目对应包含的片源类型;
S504:为每个所述片源类型设置相适应的播放模式,以建立映射关系表;
S505:将所述映射关系表存储至所述数据库。
虚拟现实设备500可以在初始控制系统中创建一个数据库,用于记载播放模式和片源类型之间的映射关系。数据库可以由虚拟现实设备500通过本地数据创建,也可以由虚拟现实设备500的服务商通过云服务的方式创建。
基于创建的数据库,在每次进入播放界面时,虚拟现实设备500可以通过遍历当前播放界面中所包含的媒资项目,并提取每个媒资项目对应的所有片源类型,以确保当前播放界面中所包含的媒资项目对应的信息能够在数据库中查询到。例如,当前播放界面中包括一个媒资列表,媒资列表中包括多个媒资项目,其中,每个媒资项目可以对应关联多个媒资文件,如媒资A对应包括两个媒资文件,即2D媒资文件和左右型3D媒资文件。在进入播放界面时,可以提取媒资A对应包含的片源类型为2D片源和左右型3D片源。
在提取片源类型后,虚拟现实设备500可以为提取到的每种片源类型设置对应的播放模式,即2D片源对应的播放模式为2D模式,左右型3D片源的播放模式为左右3D模式。并根据片源类型和播放模式建立映射关系表,其中所述映射关系表中包括多个片源类型的媒资文件地址和播放模式。
如此,通过对当前播放界面中所包含的所有媒资项目执行上述设置过程,则可以确定当前界面中所有的片源类型及其对应的播放模式,并以映射关系表的形式添加到数据库中。这一数据库维护过程可以使数据库中包含当前界面中媒资项目对应的映射关系,从而在后续查询过程中快速查询到片源类型或者播放模式。
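遍历媒资项目并建立映射关系表的过程可示意如下(数据结构与字段名均为说明而设,片源类型与播放模式的对应关系取自上文示例):

```python
# 片源类型到播放模式的固定对应关系(示意)
MODE_OF_TYPE = {
    "2D片源": "2D模式",
    "左右型3D片源": "左右3D模式",
    "上下型3D片源": "上下3D模式",
    "全景片源": "全景模式",
}

def build_mapping_table(media_items):
    """遍历播放界面中的媒资项目,为每个片源类型设置相适应的播放模式。

    media_items: [{"name": 媒资名, "files": {片源类型: 媒资文件地址}}, ...]
    返回映射关系表,每行包含媒资项目、片源类型、媒资文件地址和播放模式。
    """
    table = []
    for item in media_items:
        for src_type, url in item["files"].items():
            table.append({
                "item": item["name"],
                "type": src_type,
                "url": url,
                "mode": MODE_OF_TYPE.get(src_type, "2D模式"),
            })
    return table
```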
在查询播放数据的过程中,由于一个媒资项目可以支持多种播放模式,因此媒资项目与播放模式之间存在着“一对多”/“多对一”的映射关系。其中,一对多映射关系是指从图片/视频资源角度来说,一个图片/视频有多种播放模式,即为一对多;多对一映射关系是指从播放模式来说,多个播放模式对应于一个图片/视频资源,即为多对一。基于这种一对多/多对一的映射关系,可以通过映射框架创建数据库。能够创建数据库的框架包括开放源代码的对象关系映射框架(Hibernate)、Java数据库连接(Java Database Connectivity,JDBC)以及持久层框架(MyBatis)等。
其中,MyBatis框架具有接口绑定功能,即包括注解绑定结构化查询语言(Structured Query Language,SQL)和可扩展标记语言绑定SQL。并且,MyBatis框架支持对象导航图语言(Object Graph Navigation Language,OGNL)表达式动态结构化查询语言。因此,MyBatis框架可以通过XML或注解方式灵活配置要运行的SQL语句,并将java对象和SQL语句映射生成最终执行的SQL,最后将SQL执行的结果再映射生成java对象。MyBatis框架的学习门槛低,数据库维护方可以直接编写原生态SQL,可严格控制SQL执行性能,灵活度高。
因此,在一些实施例中,图片/视频资源与播放模式之间存在着一对多/多对一的映射关系,可以选择MyBatis框架创建并维护数据库。即如图9、图10所示,将映射关系表存储至数据库的步骤,还包括:
S551:通过读取reader对象读取所述映射关系表;
S552:获取当前线程的SQL会话,以开启事务;
S553:通过所述SQL会话读取所述映射关系表中的操作编号,以及读取SQL语句;
S554:提交事务,以将所述映射关系表存储至所述数据库。
在使用MyBatis框架创建并维护数据库的过程中,可以通过MyBatis框架中的Reader对象读取MyBatis框架映射文件,即映射关系表,再通过SqlSessionFactoryBuilder对象创建SqlSessionFactory对象,并获取当前线程的SQLSession。获取SQLSession后,可设置事务默认开启,以通过SQLSession读取映射文件中的操作编号,从而读取SQL语句,并提交事务,将映射关系表存储至数据库。
可见,本实施例中通过MyBatis框架和映射关系表可以快速建立媒资项目与播放模式之间一对多/多对一的映射关系,并且基于MyBatis框架的数据库不仅可以减少开发人员的工作量和学习内容,而且便于统一维护不同播放界面中的媒资项目,使虚拟现实设备500可以快速通过数据库查询到相适应的播放参数。
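作为示意,下列代码用Python标准库sqlite3代替MyBatis框架,演示“建表—开启事务—写入映射关系表—提交事务”的存储流程(仅为流程示意,表结构为说明而设;实际实现依赖MyBatis的SqlSession机制,并非此处的sqlite3接口):

```python
import sqlite3

def store_mapping_table(table, db_path=":memory:"):
    """将映射关系表写入数据库并提交事务,返回连接以便后续查询。

    table: [{"item","type","url","mode"}, ...] 形式的映射关系表(示意结构)。
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS mapping "
        "(item TEXT, type TEXT, url TEXT, mode TEXT)"
    )
    with conn:  # 进入事务,正常退出时自动提交
        conn.executemany(
            "INSERT INTO mapping VALUES (?, ?, ?, ?)",
            [(r["item"], r["type"], r["url"], r["mode"]) for r in table],
        )
    return conn
```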
在一些实施例中,为了能够从控制指令中提取关键信息,虚拟现实设备500可以通过解析控制指令,获取待播放媒资数据;并从待播放媒资数据中提取关键词,以在查询播放数据的过程中,可以按照关键词为基础查询片源类型或播放模式。
由于虚拟现实设备500在使用关键信息查询播放数据时可以基于片源类型查询播放模式,也可以基于播放模式查询片源类型,因此所述关键词中包括待播放媒资数据的片源类型或者用户指定的播放模式。
例如,当用户点击打开文件管理界面中的一个媒资文件后,虚拟现实设备500在切换至播放界面的过程中,可以对媒资文件的格式、文件描述信息等进行提取,以获得关键词。通过这些关键词信息,可以确定用户的指定信息,即媒资文件的片源类型。
对于根据片源类型自动选择播放模式的播放过程,由于单纯的通过关键词信息可能无法准确判断用户指定的内容,即无法确定媒资文件的片源类型,因此虚拟现实设备500还可以通过图像识别算法,确定待播放媒资数据的片源类型,即如图11所示,从待播放媒资数据中提取关键词的步骤还包括:
S211:从所述媒资数据中提取多帧图像数据;
S212:将所述多帧图像数据输入图像识别模型;
S213:获取所述图像识别模型输出的所述片源类型。
为了从媒资数据中提取片源类型,虚拟现实设备500可以在提取关键信息的过程中,提取解析获得的待播放媒资数据,并从中提取出多帧图像数据。多帧图像数据可以为等时间间隔抽取的多个图像帧,使多帧图像数据的图像画面之间具有显著差异,以使后续图像识别过程更加准确。
在提取多帧图像数据后,可以将多帧图像数据依次输入图像识别模型中,以通过图像识别模型判断图像数据中的画面排布方式。其中,图像识别模型可以是依据机器学习算法,由样本数据训练获得的分类模型。即在建立初始模型后,通过将带有标签信息的样本图像输入至该模型,并获得模型输出的分类结果。再结合分类结果与标签信息之间的差异,反向传播,调整模型参数。经过大量样本的多次输入,即可获得具有一定分类准确率的分类模型。
图像识别模型还可以通过封装图像处理算法的方式建立。即在应用中,可以通过开发一系列图像识别程序,并通过这些图像识别程序对输入的图像数据执行图像识别,以确定图像中的画面排布方式。例如,图像识别程序可以对图像数据进行切分,以获得至少两部分,再通过图片相似度算法,计算两个部分的相似度数值,当相似度数值大于设定阈值时,则确定当前媒资数据为3D图像。
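上述“切分—计算相似度—阈值判断”的识别思路可以示意如下(相似度此处用基于均方差的简化度量代替,阈值为示意取值,并非本申请限定的具体算法):

```python
import numpy as np

def detect_lr_3d(frame: np.ndarray, threshold: float = 0.9) -> bool:
    """将一帧画面沿竖直中轴线切分为左右两部分,
    计算两部分的相似度数值;相似度大于设定阈值时,
    判定当前媒资数据为左右型3D图像。"""
    mid = frame.shape[1] // 2
    left = frame[:, :mid].astype(float)
    right = frame[:, mid:mid * 2].astype(float)
    # 用 1/(1+均方差) 作为简化的相似度度量,取值范围 (0, 1]
    mse = ((left - right) ** 2).mean()
    similarity = 1.0 / (1.0 + mse)
    return bool(similarity > threshold)
```

实际应用中可对多帧图像分别判断并投票,以降低单帧误判的影响。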
通过对多帧图像数据执行的图像识别,虚拟现实设备500可以获取待播放媒资的片源类型,即获得关键信息。可见,这样的关键信息获取方式可以缓解文件格式、描述信息等内容对关键信息的影响,使得虚拟现实设备500能够适用于绝大多数媒资文件的片源类型识别过程,提高自动选择播放模式时的准确率。
由于在用户指定不同的内容时,从控制指令中提取的关键信息也不同,相应的在使用关键信息查询的播放数据也不同,因此,如图12所示,使用关键信息在数据库中查询播放数据的步骤还包括:
S310:获取所述控制指令中的用户指定信息;
S320:如果所述用户指定信息为指定片源类型,在所述数据库中查询目标播放模式;
S330:如果所述用户指定信息为指定播放模式,在所述数据库中查询目标片源类型。
虚拟现实设备500在获取用户输入的控制指令后,可以从控制指令中提取用户指定信息,即提取用户在执行交互动作时,明确选择的信息。例如,当用户选中任一左右型3D媒资项目进行播放时,用户指定信息为片源类型是左右型3D片源。而当用户在播放界面中依次选中“模式切换-3D模式(左右)”选项后,则用户指定信息为播放模式是左右型3D模式。
对于不同的用户指定信息,虚拟现实设备500可以从数据库中查询不同的播放数据。即如果用户指定信息为指定片源类型,在数据库中查询目标播放模式;如果用户指定信息为指定播放模式,在数据库中查询目标片源类型。其中,所述目标播放模式为与指定片源类型相适应的播放模式,所述目标片源类型为与指定播放模式相适应的片源类型。
例如,当用户指定信息为片源类型是左右型3D片源,则在数据库中查询的播放数据为相适应的播放模式为左右型3D模式。当用户指定信息为播放模式是左右型3D模式,则在数据库中查询当前媒资项目对应的左右型3D片源的地址,以供调用播放。
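这一双向查询过程可以示意为如下函数(映射关系表以字典列表示意,字段名为说明而设,并非本申请限定的实现):

```python
def query_play_data(mapping_table, item, specified_type=None, specified_mode=None):
    """根据用户指定信息在映射关系表中查询播放数据。

    specified_type: 用户指定的片源类型(如"左右型3D片源"),此时返回目标播放模式;
    specified_mode: 用户指定的播放模式(如"左右3D模式"),此时返回目标片源及其地址。
    """
    for row in mapping_table:
        if row["item"] != item:
            continue
        if specified_type is not None and row["type"] == specified_type:
            return {"播放模式": row["mode"]}
        if specified_mode is not None and row["mode"] == specified_mode:
            return {"片源类型": row["type"], "片源地址": row["url"]}
    return None  # 未查询到相适应的播放数据
```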
在查询到播放数据后,虚拟现实设备500可以执行对媒资数据的播放程序。为确保播放过程顺利进行,在播放媒资数据前应能够明确全部相关播放数据,例如待播放媒资项目,以及与待播放媒资项目相关的片源类型、播放模式以及片源地址等。例如,如果所述用户指定信息为指定播放模式,虚拟现实设备500可以先调用目标片源类型的媒资数据,并按照所指定的播放模式解析媒资数据,即虚拟现实设备500可以从片源地址获取媒资数据并进行解码等操作,以获取媒资数据对应的视频数据流或者具体的图片画面。
为了使显示器可以显示具体的画面内容,在解析媒资数据后,还需要将解析后的媒资数据发送至虚拟渲染场景,以通过渲染场景中的显示面板显示具体的媒资画面内容。显然,对于不同的播放模式,在渲染场景中可以通过不同的画面显示方式呈现,例如,对于左右型3D片源媒资,可以通过将每一帧图像画面的左侧部分和右侧部分分别发送给左眼相机可见的显示面板和右眼相机可见的显示面板,以分别向左右两侧的显示器输出媒资画面,形成3D效果。
其中,在调用目标片源类型的媒资数据过程中,虚拟现实设备500可以根据不同的媒资存储形式以不同的方式调用媒资数据,即如图13所示,在一些实施例中,调用目标片源类型的媒资数据的步骤,还包括:
S411:检测所述媒资数据的存储形式;
S412:如果所述存储形式为本地存储,在本地存储器中调用所述目标片源类型的媒资数据;
S413:如果所述存储形式为网络存储,提取所述目标片源类型的媒资源地址;
S414:访问所述媒资源地址,以获取所述目标片源类型的媒资数据。
虚拟现实设备500可以通过媒资数据的文件大小、文件位置等信息检测媒资数据的存储形式。通常,当用户从文件管理界面跳转至播放界面的播放过程中,所播放的媒资数据存储形式为本地存储;而当用户从媒资推荐界面跳转至播放界面的过程中,所播放的媒资数据存储形式为网络存储。
在播放过程中,如果媒资数据的存储形式为本地存储,则在本地存储器中调用所述目标片源类型的媒资数据;如果存储形式为网络存储,则可以先提取目标片源类型的媒资源地址,再通过访问该媒资源地址,获取对应片源类型的媒资数据。
显然,对于同一个媒资项目,其不同资源类型对应的媒资数据可以一部分存储在本地中,一部分存储在网络中。例如,对于同一个媒资项目A,其对应包含两种片源形式,即2D形式和3D形式。其中,2D类型的媒资数据存储在本地存储器中,而3D类型的媒资数据存储在网络存储器中。当用户调用该媒资项目的媒资数据时,可以通过控制指令以及查询获得的播放数据确定待播放的片源类型,从而按照对应的片源类型调用媒资数据。例如,当查询获得播放数据中指定片源类型为3D片源时,虽然在本地存储器中存储有该媒资项目的2D片源,依然需要访问3D片源对应的媒资源地址,以获取媒资数据。
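按存储形式调用媒资数据的分支逻辑可以示意如下(`fetch_local`、`fetch_remote`为假设的取数接口,仅作占位,实际由设备的存储与网络模块实现):

```python
def load_media(entry, fetch_local, fetch_remote):
    """entry 含 "storage"("本地存储"/"网络存储")与 "url" 字段(示意结构)。

    本地存储:直接在本地存储器中调用目标片源类型的媒资数据;
    网络存储:先提取目标片源类型的媒资源地址,再访问该地址获取媒资数据。
    """
    if entry["storage"] == "本地存储":
        return fetch_local(entry["url"])
    address = entry["url"]      # 提取媒资源地址
    return fetch_remote(address)  # 访问媒资源地址,获取媒资数据
```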
在一些实施例中,如果所述用户指定信息为指定播放模式,则可以在当前播放界面中的媒资列表与媒资数据地址之间建立一个映射关系,这个映射关系可以使用户点击媒资列表中的媒资项目时,自动访问指定播放模式下的媒资数据地址,以快速获得相适应的媒资数据。即虚拟现实设备500可以先提取当前播放界面中显示的媒资列表,再遍历媒资列表中各媒资项目的媒资数据地址,并在各媒资项目与目标片源类型的媒资数据地址之间建立映射关系,以使用户选择所述媒资列表中任一媒资项目时,获取所述目标片源类型的媒资数据。
在一些实施例中,虚拟现实设备500通过渲染场景获得上述界面对应的画面内容。其中,渲染场景是指由虚拟现实设备500渲染引擎通过渲染程序构建的一个虚拟场景。例如,如图14所示,基于unity 3D渲染引擎的虚拟现实设备500,可以在呈现显示画面时,构建一个unity 3D场景。在unity 3D场景中,可以添加各种虚拟物体和功能控件,以渲染出特定的使用场景。
为了输出渲染后的画面,虚拟现实设备500还可以在unity 3D场景中设置虚拟相机。例如,虚拟现实设备500可以按照用户双眼的位置关系,在unity 3D场景中设置左眼相机和右眼相机,两个虚拟相机可以同时对unity 3D场景中的物体进行拍摄,从而向左显示器和右显示器分别输出渲染画面。为了获得更好的沉浸感体验,两个虚拟相机在unity 3D场景中的角度可以随着虚拟现实设备500的位姿传感器实时调整,从而在用户佩戴虚拟现实设备500行动时,可以实时输出不同观看角度下的unity 3D场景中的渲染画面。
经过渲染场景输出的画面,可以被送入显示器中进行显示。在一些实施例中,虚拟现实设备500中设有光学组件,光学组件用于增加显示画面到用户双眼的光程,从而使用户能够看清楚显示器中显示的画面内容。光学组件可以由多个凸透镜组成,通过多个凸透镜对显示器发出的光线进行折射,使用户能够看清所显示的画面内容,并获得沉浸感体验。
但是由于凸透镜的中间区域厚度较大,而在边缘区域的厚度较小,使得用户观看到的靠近透镜边缘位置的显示画面产生变形,即畸变。因此,对于部分虚拟现实设备500,可以设定一个显示区域,显示区域内的画面可正常显示,而显示区域外的画面可以被隐藏,以缓解畸变对边缘画面的影响。例如,虚拟现实设备500可以基于从渲染场景中获得的画面内容,针对显示器尺寸设置初始显示区域为3840×2160的矩形区域,该显示区域可以根据体验效果的统计结果确定,具有一定的经验性。再按照设定的显示区域形状,将显示区域以外的画面进行隐藏,即在显示器中,中部区域的画面内容可见,而边缘区域显示为纯黑图案。
其中,虚拟现实设备500在输出图像画面时的系统默认显示区域称为初始显示区域,按照初始显示区域进行显示时,可以满足多数用户要求,但是由于不同用户的双眼距离不同,使得部分用户的面部特征不适合按照初始显示区域观看显示画面,造成这些用户观看到的画面不清楚,降低用户体验。因此,为了适应不同用户的面部特征,在本申请的部分实施例中提供一种虚拟现实设备及VR画面显示方法。其中,所述虚拟现实设备500包括显示器和控制器,显示器用于显示各种用户界面;控制器用于处理虚拟现实设备500的各种使用参数和交互数据,以控制在显示器中呈现不同的画面内容,并且如图15所示,控制器可以被进一步配置为执行所述VR画面显示方法,包括:
S6:接收用户输入的用于设定显示区域的控制指令。
用户使用虚拟现实设备500时,可以根据需要输入各种控制指令,以实现用户与虚拟现实设备500之间的交互。当用户输入用于设定显示区域的控制指令时,虚拟现实设备500可以根据操作系统中预置的VR画面显示程序,运行设定显示区域的相关程序。
其中,用于设定显示区域的控制指令可以根据虚拟现实设备500的硬件配置和操作系统以不同的方式实现输入。例如,当用户首次使用虚拟现实设备500或者在恢复出厂设置以后的首次使用时,虚拟现实设备500可以默认用户输入了用于设定显示区域的控制指令。
而对于正常使用中的虚拟现实设备500,可以通过特定的入口启动设置显示区域相关程序。例如,可以在设置界面或者其他UI界面的状态栏中设置调整显示区域控件,用户可以在使用中,通过点击调整显示区域控件,运行设定显示区域的控制指令,即通过界面UI交互输入用于设定显示区域的控制指令。
用户还可以通过快捷键的特定交互动作输入用于设定显示区域的控制指令。虚拟现实设备500可以根据自身携带的按键情况,设置快捷键策略。例如,通过连续三次点击虚拟现实设备500的电源键,触发设置显示区域相关程序。即用于设定显示区域的控制指令为连续三次点击电源键的快捷键指令。
针对部分虚拟现实设备500,用户还可以借助其他交互设备或交互系统完成控制指令的输入。例如,可以在虚拟现实设备500中内置智能语音系统,用户可以通过麦克风等音频输入设备输入语音信息,如“设定显示区域”、“我看不清画面”等。智能语音系统通过对用户语音信息进行转化、分析、处理等方式识别语音信息的含义,并根据识别结果生成控制指令。
S7:响应于所述控制指令,提取所述控制指令中指定的自定义区域。
用户在输入控制指令时,可以在控制指令中指定用户自定义区域。虚拟现实设备500在接收到用户输入的用于设定显示区域的控制指令后,可以从控制指令中提取用户指定的自定义区域。例如,当用户输入用于设定显示区域的控制指令时,可以在显示器中显示输入界面,用户可以在输入界面中点击要设定的显示区域的四个顶点,从而根据四个顶点坐标生成自定义区域。
用户还可以通过其他方式指定自定义区域,例如,显示的输入界面中可以包括文本框或数值滚动条,用户可以通过文本框输入双眼距离(瞳距),从而使虚拟现实设备500根据用户输入的数值自动匹配一个自定义区域。或者由用户直接输入自定义区域的宽度和高度,并通过输入的宽度和高度确定自定义区域。
在一些实施例中,还可以在虚拟现实设备500中预先设置多个选项,每个选项对应一个自定义区域,当用户输入用于设定显示区域的控制指令时,可以从设置的选项中选择相适应的自定义区域。例如,可以在用户输入用于设定显示区域的控制指令时,显示区域选择界面;在区域选择界面中,可以包括按年龄、性别、地域等条件设置的多个选项,每个选项对应有该条件下的自定义区域。
S8:按照所述自定义区域,对待显示画面执行缩放处理,以获得自定义区域图像。
根据用户在输入控制指令时指定的自定义区域,虚拟现实设备500可以对待显示画面执行缩放处理,以获得自定义区域图像。其中,当用户指定的自定义区域范围大于初始显示区域的范围时,可以对待显示画面进行放大处理;当用户指定的自定义区域范围小于初始显示区域范围时,可以对待显示画面进行缩小处理。
但由于用户自定义的区域通常没有固定的宽高比,因此在对待显示画面执行缩放处理时,可以针对用户自定义区域的宽度和高度,分别与初始显示区域的宽度和高度进行对比,以确定出缩放比例,从而按照确定的缩放比例对待显示画面进行等比例缩放。
S9:控制所述显示器显示所述自定义区域图像。
在对待显示画面执行缩放处理后,虚拟现实设备500可以将处理后的画面图像发送到显示器中进行显示。由于显示器中接收到的图像画面已经过缩放处理,因此最终呈现在显示器中的画面可以按照用户自定义区域进行显示,即显示的画面符合当前用户的面部特征,使用户能够观看到清晰的画面,并减轻观影过程中不舒适的观影体验。
如图16所示,在一些实施例中,为了获得自定义区域,接收用户输入的用于设定显示区域的控制指令的步骤中,所述控制器被进一步配置为:
S610:提取用户输入的显示区域顶点坐标;
S620:根据所述顶点坐标中的横坐标值计算宽度值,以及根据所述顶点坐标中的纵坐标值计算高度值;
S630:按照所述宽度值和所述高度值,生成自定义区域。
在接收到控制指令后,虚拟现实设备500可以控制显示器依次显示用于提示输入各顶点坐标的输入界面,并记录用户在输入界面中输入的顶点坐标。其中,所述顶点坐标包括左上角点坐标、左下角点坐标、右下角点坐标以及右上角点坐标。
例如,在设备首次使用(包括恢复出厂设置后)或者用户点击调整显示区域的图标后,虚拟现实设备500会启动调整显示区域的应用。应用启动后,会显示如图17所示的网格图,以方便用户选择合适的舒适区。为了引导用户完成顶点坐标的输入,虚拟现实设备500可以首先提示用户点击左上角感觉舒适的点,并记录该点的坐标信息(X1,Y1);然后提示用户点击右上角感觉舒适的点,并记录该点的坐标信息(X2,Y2);接下来提示用户点击左下角感觉舒适的点,记录该点的坐标信息(X3,Y3);最后提示用户点击右下角感觉舒适的点,记录该点的坐标信息(X4,Y4)。通过四次提示,可以引导用户完成输入,并根据用户输入获得合适区域的坐标信息,如图18所示。
在用户输入顶点坐标后,可以根据顶点坐标计算自定义区域的宽度和高度,即根据顶点坐标中的横坐标值计算宽度值,以及根据顶点坐标中的纵坐标值计算高度值。例如,根据左上角和右上角顶点的横坐标计算宽度值W1=X2-X1;根据左上角和左下角顶点的纵坐标计算高度值H1=Y3-Y1。
最后根据计算获得宽度值和高度值,生成自定义区域。即自定义区域可以是宽度为W1,高度为H1的矩形区域。显然,还可以通过其他顶点坐标计算自定义区域的宽度值和高度值,例如根据左下角和右下角顶点的横坐标计算宽度值W2=X4-X3;根据右上角和右下角顶点的纵坐标计算高度值H2=Y4-Y2,并根据宽度值W2和高度值H2生成自定义区域。
同理,可以通过其他宽度值和高度值的组合,作为最终用于生成自定义区域的宽度值和高度值。例如,可以使用宽度值W1和高度值H2作为自定义区域的宽度和高度,也可以使用宽度值W2和高度值H1作为自定义区域的宽度和高度。具体采用的宽度和高度的组合方式可以根据顶点坐标的输入方式确定,或者根据顶点坐标的输入结果确定。
由于用户在输入界面输入了四个顶点坐标,因此在计算自定义宽度和高度时,还可以分别对顶部宽度和底部宽度进行计算,以及对左侧高度和右侧高度进行计算。在一些实施例中,可以计算左上角坐标与右上角坐标中横坐标之差,以获得第一宽度值,即W1,以及计算左下角坐标与右下角坐标的横坐标之差,以获得第二宽度值,即W2。同理,计算左上角坐标与左下角坐标中纵坐标之差,以获得第一高度值,即H1;以及计算右上角坐标与右下角坐标的纵坐标之差,以获得第二高度值,即H2。再根据第一宽度值和第二宽度值计算平均宽度值,作为自定义区域的宽度,即W=(W1+W2)/2;以及根据第一高度值和第二高度值计算平均高度值,作为自定义区域的高度,即H=(H1+H2)/2。
由于用户输入顶点坐标时是通过点击输入界面完成,受限于用户操作过程,所输入的顶点坐标可能过于不规则,导致无法识别出合适的自定义区域。对此,在用户输入顶点坐标后,还可以对坐标位置进行检测。即如图19所示,在一些实施例中,按照所述宽度值和所述高度值,生成自定义区域的步骤还包括:
S631:根据所述顶点坐标计算输入误差值;
S632:如果所述输入误差值超出设定的误差值范围,控制所述显示器显示用于提示用户重新输入顶点坐标的输入界面;
S633:如果所述输入误差值未超出设定的误差值范围,根据所述顶点坐标计算平均宽度值与平均高度值,以生成自定义区域。
其中,所述输入误差值包括根据横坐标计算获得的宽度误差值和根据纵坐标计算获得的高度误差值。虚拟现实设备500可以通过计算W1和W2之间的差值,生成宽度误差值。当宽度误差值超出设定的误差范围时,说明用户选择的区域是不合适的,该次的设置无效,因此可以提示用户重新选择显示的舒适区域;当宽度误差值在设定的误差范围内,则根据W1和W2计算选择区域的平均宽度值W=(W1+W2)/2,作为自定义区域的宽度。
同理,虚拟现实设备500还可以根据第一高度值与第二高度值生成高度误差值,即计算H1和H2之间的误差,当高度误差值超出设定的误差范围,说明用户选择的区域是不合适的,该次的设置无效,因此可以提示用户重新选择显示的舒适区域;当高度误差值在设定的误差范围内,则根据H1和H2计算选择区域的平均高度值H=(H1+H2)/2,作为自定义区域的高度。
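上述由四个顶点计算两组宽高值、校验输入误差并取平均的过程可以示意如下(误差范围tolerance为示意参数,实际取值由设备设定):

```python
def region_from_vertices(p1, p2, p3, p4, tolerance=100):
    """由左上p1、右上p2、左下p3、右下p4四个顶点坐标计算自定义区域。

    先计算两组宽度值和高度值并校验输入误差;
    误差超出设定范围时返回None(本次设置无效,提示用户重新输入);
    否则返回平均宽度W与平均高度H。
    """
    w1 = p2[0] - p1[0]          # 第一宽度值:右上与左上横坐标之差
    w2 = p4[0] - p3[0]          # 第二宽度值:右下与左下横坐标之差
    h1 = p3[1] - p1[1]          # 第一高度值:左下与左上纵坐标之差
    h2 = p4[1] - p2[1]          # 第二高度值:右下与右上纵坐标之差
    if abs(w1 - w2) > tolerance or abs(h1 - h2) > tolerance:
        return None             # 输入误差超出设定的误差值范围
    return (w1 + w2) / 2, (h1 + h2) / 2
```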
可见,在本实施例中,虚拟现实设备500可以通过显示输入界面,并提示用户依次输入觉得合适的点,从而自定义显示区域。并且,通过对用户输入的顶点坐标进行检测,确定自定义区域的宽度和高度,从而便于用户输入符合显示区域规范的自定义区域。
在获得自定义区域的宽度和高度后,虚拟现实设备500可以按照计算获得的宽度和高度信息确定自定义区域的大小,而为了最终确定显示区域,还需要对自定义区域的位置进行设定,因此在一些实施例中,按照所述宽度值和所述高度值,生成自定义区域的步骤还包括:根据所述顶点坐标计算中心点坐标;并以所述中心点坐标为基准生成自定义区域。
其中,中心点坐标可以分别通过顶点坐标中横坐标值和纵坐标值计算获得。例如,中心点坐标(X’,Y’)中,横坐标X1’=(X1+X2)/2,或者X2’=(X3+X4)/2;纵坐标Y1’=(Y1+Y3)/2,或者Y2’=(Y2+Y4)/2。还可以根据两次计算获得坐标平均值,作为中心点坐标。即X’=(X1’+X2’)/2;Y’=(Y1’+Y2’)/2。
在计算获得中心点坐标以后,虚拟现实设备500可以以中心点坐标为基准,生成自定义区域,即自定义区域的中心位于中心点坐标上,以使自定义区域中能够更多地保留原视频画面中的主要内容。
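中心点坐标的计算可以示意如下(顶点顺序沿用左上、右上、左下、右下的约定):

```python
def center_from_vertices(p1, p2, p3, p4):
    """由左上p1、右上p2、左下p3、右下p4四个顶点坐标计算自定义区域的中心点:
    分别由上边/下边顶点求横坐标,由左边/右边顶点求纵坐标,再取两次结果的平均。"""
    x1 = (p1[0] + p2[0]) / 2    # 由上边两顶点求横坐标
    x2 = (p3[0] + p4[0]) / 2    # 由下边两顶点求横坐标
    y1 = (p1[1] + p3[1]) / 2    # 由左边两顶点求纵坐标
    y2 = (p2[1] + p4[1]) / 2    # 由右边两顶点求纵坐标
    return (x1 + x2) / 2, (y1 + y2) / 2
```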
在提取用户自定义区域后,虚拟现实设备500对待显示画面执行缩放处理,以获得自定义区域图像。为此,如图20所示,在一些实施例中,对待显示画面执行缩放处理的步骤还包括:
S810:获取初始显示区域;
S820:对比所述初始显示区域与所述自定义区域,以生成缩放比例;
S830:按照所述缩放比例,对所述待显示画面执行缩放处理。
虚拟现实设备500在提取自定义区域后,可以提取待显示画面的初始显示区域,即系统默认的显示区域。并将初始显示区域与用户自定义区域进行对比,从而根据初始显示区域与自定义区域之间的对比结果,生成缩放比例。其中,所述缩放比例可以按照初始显示区域与用户自定义区域的宽度,和/或高度,以及面积等参数的对比结果生成,用于按照缩放比例,对待显示画面执行缩放处理。
例如,缩放比例可以通过计算初始显示区域宽度与自定义区域宽度的比值,以生成第一比例值,即计算VR应用设置的宽度Width和用户自定义区域宽度W的比值,获得第一比例值Ratio1=W/Width。再计算初始显示区域高度与自定义区域高度的比值,以生成第二比例值,即计算VR应用设置的高度Height和用户自定义区域高度H的比值,获得第二比例值Ratio2=H/Height。
为保证VR应用显示内容的宽高比例不变,宽度和高度需使用相同的缩放比例,再对比第一比例值和第二比例值,以确定第一比例值和第二比例值中用于执行缩放的比例值。为了保证所有的内容都显示在用户设置的舒适区域内,取Ratio1和Ratio2中较小的值确定为缩放比例Ratio。即如果第一比例值Ratio1大于或等于第二比例值Ratio2,确定第二比例值Ratio2为缩放比例;如果第一比例值Ratio1小于第二比例值Ratio2,确定第一比例值Ratio1为缩放比例。
最后,按照缩放比例,对待显示画面执行缩放处理。例如,虚拟现实设备500可以通过计算初始显示区域宽度与缩放比例的乘积获得实际显示宽度值,即计算VR应用实际显示的宽度值Wshow=Width×Ratio;通过计算初始显示区域高度与缩放比例的乘积获得实际显示高度值,即计算VR应用实际显示的高度值Hshow=Height×Ratio。从而按照实际显示宽度值与实际显示高度值,对待显示图像执行像素插值或合并算法,以生成自定义区域图像。
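上述缩放比例与实际显示尺寸的计算过程可以归纳为如下示意代码(Width、Height为初始显示区域尺寸,W、H为用户自定义区域尺寸):

```python
def compute_display_size(width, height, w, h):
    """取宽、高两个比例值中较小者作为缩放比例,
    保证宽高比不变且全部内容都落入自定义区域内,
    返回 (缩放比例, 实际显示宽度, 实际显示高度)。"""
    ratio1 = w / width           # 第一比例值 Ratio1 = W / Width
    ratio2 = h / height          # 第二比例值 Ratio2 = H / Height
    ratio = ratio2 if ratio1 >= ratio2 else ratio1
    return ratio, width * ratio, height * ratio
```

例如初始显示区域为3840×2160、自定义区域为1920×1200时,两个比例值分别约为0.5和0.56,取较小者0.5作为缩放比例。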
可见,通过上述实施例,虚拟现实设备500可以支持在用户使用时,根据观看的舒适度手动设置显示区域,从而达到最佳的观看效果。并且通过对比初始显示区域与自定义区域,生成缩放比例,从而通过缩放比例执行缩放处理,以保证VR应用显示内容的宽高比例不变。
本申请提供的实施例之间的相似部分相互参见即可,以上提供的具体实施方式只是本申请总的构思下的几个示例,并不构成本申请保护范围的限定。对于本领域的技术人员而言,在不付出创造性劳动的前提下依据本申请方案所扩展出的任何其他实施方式都属于本申请的保护范围。

Claims (19)

  1. 一种虚拟现实设备,包括:
    显示器,被配置为显示播放界面;
    控制器,被配置为:
    接收用户输入的用于播放媒资数据的控制指令;
    响应于所述控制指令,从所述控制指令中提取关键信息;
    使用所述关键信息在数据库中查询播放数据,所述播放数据包括播放模式和/或片源类型,所述数据库包括多种片源类型和多种播放模式之间的映射关系;
    按照所述播放数据,控制所述显示器在所述播放界面中显示所述媒资数据。
  2. 根据权利要求1所述的虚拟现实设备,接收用户输入的用于播放媒资数据的控制指令的步骤前,所述控制器被进一步配置为:
    创建数据库;
    遍历当前播放界面中的媒资项目;
    提取每个媒资项目对应包含的片源类型;
    为每个所述片源类型设置相适应的播放模式,以建立映射关系表,所述映射关系表中包括多个片源类型的媒资文件地址和播放模式;
    将所述映射关系表存储至所述数据库。
  3. 根据权利要求2所述的虚拟现实设备,将所述映射关系表存储至所述数据库的步骤中,所述控制器被进一步配置为:
    通过读取reader对象读取所述映射关系表;
    获取当前线程的SQL会话,以开启事务;
    通过所述SQL会话读取所述映射关系表中的操作编号,以及读取SQL语句;
    提交事务,以将所述映射关系表存储至所述数据库。
  4. 根据权利要求1所述的虚拟现实设备,从所述控制指令中提取关键信息的步骤中,所述控制器被进一步配置为:
    解析所述控制指令,以获取待播放媒资数据;
    从待播放媒资数据中提取关键词,所述关键词中包括待播放媒资数据的片源类型或者用户指定的播放模式。
  5. 根据权利要求4所述的虚拟现实设备,从待播放媒资数据中提取关键词的步骤中,所述控制器被进一步配置为:
    从所述媒资数据中提取多帧图像数据;
    将所述多帧图像数据输入图像识别模型;
    获取所述图像识别模型输出的所述片源类型。
  6. 根据权利要求1所述的虚拟现实设备,使用所述关键信息在数据库中查询播放数据的步骤中,所述控制器被进一步配置为:
    获取所述控制指令中的用户指定信息;
    如果所述用户指定信息为指定片源类型,在所述数据库中查询目标播放模式,所述目标播放模式为与所述指定片源类型相适应的播放模式;
如果所述用户指定信息为指定播放模式,在所述数据库中查询目标片源类型,所述目标片源类型为与所述指定播放模式相适应的片源类型。
  7. 根据权利要求6所述的虚拟现实设备,如果所述用户指定信息为指定播放模式,所述控制器被进一步配置为:
    调用所述目标片源类型的媒资数据;
    按照所述指定播放模式解析所述媒资数据;
    将解析后的所述媒资数据发送至虚拟渲染场景,以形成媒资画面。
  8. 根据权利要求7所述的虚拟现实设备,调用所述目标片源类型的媒资数据的步骤中,所述控制器被进一步配置为:
    检测所述媒资数据的存储形式;
    如果所述存储形式为本地存储,在本地存储器中调用所述目标片源类型的媒资数据;
    如果所述存储形式为网络存储,提取所述目标片源类型的媒资源地址;
    访问所述媒资源地址,以获取所述目标片源类型的媒资数据。
  9. 根据权利要求6所述的虚拟现实设备,如果所述用户指定信息为指定播放模式,所述控制器被进一步配置为:
    提取当前播放界面中显示的媒资列表;
    遍历所述媒资列表中各媒资项目的媒资数据地址;
    在各媒资项目与所述目标片源类型的媒资数据地址之间建立映射关系,以使用户选择所述媒资列表中任一媒资项目时,获取所述目标片源类型的媒资数据。
  10. 一种媒资播放方法,应用于虚拟现实设备,所述虚拟现实设备包括显示器和控制器,所述媒资播放方法包括:
    接收用户输入的用于播放媒资数据的控制指令;
    响应于所述控制指令,从所述控制指令中提取关键信息;
    使用所述关键信息在数据库中查询播放数据,所述播放数据包括播放模式和/或片源类型,所述数据库包括多种片源类型和多种播放模式之间的映射关系;
    按照所述播放数据,控制所述显示器在所述播放界面中显示所述媒资数据。
  11. 一种虚拟现实设备,包括:
    显示器;
    控制器,被配置为:
    接收用户输入的用于设定显示区域的控制指令;
    响应于所述控制指令,提取所述控制指令中指定的自定义区域;
    按照所述自定义区域,对待显示画面执行缩放处理,以获得自定义区域图像;
    控制所述显示器显示所述自定义区域图像。
  12. 根据权利要求11所述的虚拟现实设备,所述用于设定显示区域的控制指令在设备首次启动时,或在用户点击调整显示区域控件时输入;接收用户输入的用于设定显示区域的控制指令的步骤中,所述控制器被进一步配置为:
    提取用户输入的显示区域顶点坐标;
    根据所述顶点坐标中的横坐标值计算宽度值,以及根据所述顶点坐标中的纵坐标值计算高度值;
    按照所述宽度值和所述高度值,生成自定义区域。
  13. 根据权利要求12所述的虚拟现实设备,按照所述宽度值和所述高度值,生成自定义区域的步骤中,所述控制器被进一步配置为:
    根据所述顶点坐标计算中心点坐标;
    以所述中心点坐标为基准生成自定义区域。
  14. 根据权利要求12所述的虚拟现实设备,提取用户输入的显示区域顶点坐标的步骤中,所述控制器被进一步配置为:
    在接收到所述控制指令后,控制所述显示器依次显示用于提示输入各所述顶点坐标的输入界面;
    记录用户在所述输入界面中输入的顶点坐标,所述顶点坐标包括左上角点坐标、左下角点坐标、右下角点坐标以及右上角点坐标。
  15. 根据权利要求14所述的虚拟现实设备,按照所述宽度值和所述高度值,生成自定义区域的步骤中,所述控制器被进一步配置为:
    根据所述顶点坐标计算输入误差值,所述输入误差值包括根据横坐标计算获得的宽度误差值和根据纵坐标计算获得的高度误差值;
    如果所述输入误差值超出设定的误差值范围,控制所述显示器显示用于提示用户重新输入顶点坐标的输入界面;
    如果所述输入误差值未超出设定的误差值范围,根据所述顶点坐标计算平均宽度值与平均高度值,以生成自定义区域。
  16. 根据权利要求15所述的虚拟现实设备,根据所述顶点坐标计算输入误差值的步骤中,所述控制器被进一步配置为:
    计算左上角坐标与右上角坐标中横坐标之差,以获得第一宽度值,以及计算左下角坐标与右下角坐标的横坐标之差,以获得第二宽度值;
    根据所述第一宽度值与所述第二宽度值生成宽度误差值,所述宽度误差值等于所述第一宽度值与所述第二宽度值的差值;
    计算左上角坐标与左下角坐标中纵坐标之差,以获得第一高度值,以及计算右上角坐标与右下角坐标的纵坐标之差,以获得第二高度值;
    根据所述第一高度值与所述第二高度值生成高度误差值,所述高度误差值等于所述第一高度值与所述第二高度值的差值。
  17. 根据权利要求11所述的虚拟现实设备,对待显示画面执行缩放处理的步骤中,所述控制器被进一步配置为:
    获取初始显示区域;
    对比所述初始显示区域与所述自定义区域,以生成缩放比例;
    按照所述缩放比例,对所述待显示画面执行缩放处理。
  18. 根据权利要求17所述的虚拟现实设备,对比所述初始显示区域与所述自定义区域,以生成缩放比例的步骤中,所述控制器被进一步配置为:
    计算所述初始显示区域宽度与所述自定义区域宽度的比值,以生成第一比例值;
    计算所述初始显示区域高度与所述自定义区域高度的比值,以生成第二比例值;
    对比所述第一比例值和第二比例值;
    如果所述第一比例值大于或等于所述第二比例值,确定所述第二比例值为所述缩放比例;
    如果所述第一比例值小于所述第二比例值,确定所述第一比例值为所述缩放比例。
  19. 根据权利要求17所述的虚拟现实设备,按照所述缩放比例,对所述待显示画面执行缩放处理的步骤中,所述控制器被进一步配置为:
    计算实际显示宽度值,所述实际显示宽度值为所述初始显示区域宽度与所述缩放比例的乘积;
    计算实际显示高度值,所述实际显示高度值为所述初始显示区域高度与所述缩放比例的乘积;
    按照所述实际显示宽度值与所述实际显示高度值,对所述待显示图像执行像素插值或合并算法,以生成自定义区域图像。
PCT/CN2022/078018 2021-03-16 2022-02-25 一种虚拟现实设备及媒资播放方法 WO2022193931A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110280647.4A CN114327033A (zh) 2021-03-16 2021-03-16 一种虚拟现实设备及媒资播放方法
CN202110280647.4 2021-03-16

Publications (1)

Publication Number Publication Date
WO2022193931A1 true WO2022193931A1 (zh) 2022-09-22

Family

ID=81044226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/078018 WO2022193931A1 (zh) 2021-03-16 2022-02-25 一种虚拟现实设备及媒资播放方法

Country Status (2)

Country Link
CN (1) CN114327033A (zh)
WO (1) WO2022193931A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117435825A (zh) * 2022-07-12 2024-01-23 中兴通讯股份有限公司 基于虚拟现实的导航方法、控制器以及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110126160A1 (en) * 2009-11-23 2011-05-26 Samsung Electronics Co., Ltd. Method of providing 3d image and 3d display apparatus using the same
JP2016130985A (ja) * 2015-01-15 2016-07-21 セイコーエプソン株式会社 頭部装着型表示装置、頭部装着型表示装置を制御する方法、コンピュータープログラム
CN106792094A (zh) * 2016-12-23 2017-05-31 歌尔科技有限公司 Vr设备播放视频的方法和vr设备
CN107103638A (zh) * 2017-05-27 2017-08-29 杭州万维镜像科技有限公司 一种虚拟场景与模型的快速渲染方法
CN107580244A (zh) * 2017-07-31 2018-01-12 上海与德科技有限公司 裁切片源的方法、裁切片源的设备和终端
CN110572656A (zh) * 2019-09-19 2019-12-13 北京视博云科技有限公司 一种编码方法、图像处理方法、装置及系统

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136874B2 (en) * 2002-10-16 2006-11-14 Microsoft Corporation Adaptive menu system for media players
US7685154B2 (en) * 2006-10-13 2010-03-23 Motorola, Inc. Method and system for generating a play tree for selecting and playing media content
CN102377964A (zh) * 2010-08-16 2012-03-14 康佳集团股份有限公司 电视中实现画中画的方法、装置及对应的电视机
CN103618913A (zh) * 2013-12-13 2014-03-05 乐视致新电子科技(天津)有限公司 在智能电视中播放3d片源的方法及装置
CN111209440B (zh) * 2020-01-13 2023-04-14 深圳市雅阅科技有限公司 一种视频播放方法、装置和存储介质
CN112333509B (zh) * 2020-10-30 2023-04-14 Vidaa美国公司 一种媒资推荐方法、推荐媒资的播放方法及显示设备

Also Published As

Publication number Publication date
CN114327033A (zh) 2022-04-12

Similar Documents

Publication Publication Date Title
CN110636353B (zh) 一种显示设备
CN106576184B (zh) 信息处理装置、显示装置、信息处理方法、程序和信息处理系统
CN113064684B (zh) 一种虚拟现实设备及vr场景截屏方法
CN111970456B (zh) 拍摄控制方法、装置、设备及存储介质
WO2021135678A1 (zh) 生成剪辑模板的方法、装置、电子设备及存储介质
US9294670B2 (en) Lenticular image capture
CN112732089A (zh) 一种虚拟现实设备及快捷交互方法
US20150213784A1 (en) Motion-based lenticular image display
CN112866773B (zh) 一种显示设备及多人场景下摄像头追踪方法
WO2022193931A1 (zh) 一种虚拟现实设备及媒资播放方法
CN114302221B (zh) 一种虚拟现实设备及投屏媒资播放方法
CN113066189B (zh) 一种增强现实设备及虚实物体遮挡显示方法
CN114363705A (zh) 一种增强现实设备及交互增强方法
WO2022151882A1 (zh) 虚拟现实设备
WO2022151883A1 (zh) 虚拟现实设备
WO2022151864A1 (zh) 虚拟现实设备
CN116170624A (zh) 一种对象展示方法、装置、电子设备及存储介质
CN115129280A (zh) 一种虚拟现实设备及投屏媒资播放方法
CN114286077B (zh) 一种虚拟现实设备及vr场景图像显示方法
CN112905007A (zh) 一种虚拟现实设备及语音辅助交互方法
CN112732088B (zh) 一种虚拟现实设备及单目截屏方法
WO2022111005A1 (zh) 虚拟现实设备及vr场景图像识别方法
CN114327032A (zh) 一种虚拟现实设备及vr画面显示方法
US20230326161A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
WO2024131479A1 (zh) 虚拟环境的显示方法、装置、可穿戴电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22770286

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22770286

Country of ref document: EP

Kind code of ref document: A1