WO2018196519A1 - 一种播放视频的方法和设备 - Google Patents

一种播放视频的方法和设备 Download PDF

Info

Publication number
WO2018196519A1
WO2018196519A1, PCT/CN2018/079843
Authority
WO
WIPO (PCT)
Prior art keywords
scene
video
played
dimensional
resource file
Prior art date
Application number
PCT/CN2018/079843
Other languages
English (en)
French (fr)
Inventor
涂成义
尹海生
朱兴昌
孔建华
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2018196519A1 publication Critical patent/WO2018196519A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens

Definitions

  • the present invention relates to virtual reality (VR) technology, and in particular, to a method and device for playing video.
  • with VR video, a user can watch 360-degree panoramic video content by moving the head; this requires that, in the process of making a VR video, each frame be shot at full angle and the shots finally be synthesized into a 360-degree panoramic picture. VR video is therefore difficult to shoot and expensive to produce, which results in a small number of VR videos and limits the promotion and popularization of VR video.
  • in view of this, the embodiments of the present invention provide a method and a device for playing a video, which can reduce the production cost of VR video.
  • an embodiment of the present invention provides a method for playing a video, where the method includes:
  • the scene resource file includes: a projection picture combination corresponding to the virtual three-dimensional scene model, viewing attribute information, and screen position attribute information;
  • the player is loaded to the video play screen, and the to-be-played video is played in the player after the loading is completed.
  • the rendering is performed in a three-dimensional space according to the preset rendering policy, and the rendered three-dimensional scene is obtained, which specifically includes:
  • the panoramic picture of the scene resource is rendered in a three-dimensional space according to a preset rendering function, to obtain a rendered three-dimensional scene;
  • the position and angle of view of the viewer are set according to the viewing attribute information in the rendered three-dimensional scene.
  • the creating a video play screen in the rendered three-dimensional scene includes:
  • a video play screen is created in the rendered three-dimensional scene according to the screen position attribute information.
  • the loading the player to the video playing screen includes:
  • the player after the setting is completed is loaded to the video playing screen.
  • the method further includes:
  • the viewer's perspective is adjusted according to the playing state and the received control command.
  • the acquiring a corresponding scene resource file according to the to-be-played video includes:
  • the image combination corresponding to the three-dimensional scene model is obtained by projecting the three-dimensional scene model by using a preset projection algorithm, including:
  • the three-dimensional scene model is projected onto six faces of the cube, and six images are generated in order, so that six images can be seamlessly stitched and restored into a sky box model during rendering.
  • the acquiring a corresponding scene resource file according to the to-be-played video includes:
  • the scene resource file is downloaded to the local video playback device according to the scene identifier.
  • an embodiment of the present invention provides a video playing device, where the device includes: an acquiring module, a rendering module, a creating module, and a loading module;
  • the acquiring module is configured to acquire a corresponding scene resource file according to the to-be-played video; wherein the scene resource file includes: a projection image combination corresponding to the virtual three-dimensional scene model, viewing attribute information, and screen position attribute information;
  • the rendering module is configured to perform rendering in a three-dimensional space according to the preset rendering policy according to the scene resource file, to obtain a rendered three-dimensional scene;
  • the creating module is configured to create a video playing screen in the rendered three-dimensional scene
  • the loading module is configured to load a player to the video playing screen, and play the to-be-played video in a player after loading.
  • the rendering module is specifically configured to:
  • the panoramic picture of the scene resource is presented in a three-dimensional space according to a preset rendering function, to obtain a rendered three-dimensional scene;
  • the position and the angle of view of the viewer are set according to the viewing attribute information in the rendered three-dimensional scene.
  • the creating module is specifically configured to create a video playing screen in the rendered three-dimensional scene according to the screen position attribute information.
  • the loading module is specifically configured to:
  • the device further includes: an adjustment module, configured to adjust the viewer's perspective according to the playing state and the received control command when playing the to-be-played video.
  • the acquiring module is specifically configured to:
  • the obtaining module is configured to:
  • the three-dimensional scene model is projected onto six faces of the cube, and six images are generated in order, so that six images can be seamlessly stitched and restored into a sky box model during rendering.
  • the acquiring module is specifically configured to:
  • the scene resource file is downloaded to the local video playback device according to the scene identifier.
  • an embodiment of the present invention provides a video playback device, where the device includes: a communication interface, a memory, a processor, and a bus;
  • the bus is configured to connect the communication interface, the processor, and the memory, and to support mutual communication among these components;
  • the communication interface is configured to perform data transmission with an external network element
  • the memory is configured to store instructions and data
  • the processor executes the instruction to: obtain a corresponding scene resource file according to the to-be-played video; wherein the scene resource file includes: a projection image combination corresponding to the virtual three-dimensional scene model, viewing attribute information, and screen position attribute information;
  • the player is loaded to the video play screen, and the to-be-played video is played in the player after the loading is completed.
  • the processor is specifically configured to:
  • the panoramic picture of the scene resource is rendered in a three-dimensional space according to a preset rendering function, to obtain a rendered three-dimensional scene;
  • the position and angle of view of the viewer are set according to the viewing attribute information in the rendered three-dimensional scene.
  • the processor is specifically configured to:
  • a video play screen is created in the rendered three-dimensional scene according to the screen position attribute information.
  • the processor is specifically configured to:
  • the player after the setting is completed is loaded to the video playing screen.
  • the processor is further configured to:
  • the viewer's perspective is adjusted according to the playing state and the received control command.
  • the processor is specifically configured to:
  • the processor is specifically configured to: project the three-dimensional scene model onto six faces of the cube, and generate six images in order, so that the six images can be seamlessly spliced and restored into a sky box model during rendering.
  • the processor is specifically configured to:
  • the communication interface is instructed to download the scene resource file to the local video playback device according to the scene identifier.
  • a storage medium having stored therein a computer program, wherein the computer program is configured to execute the steps of any one of the method embodiments described above.
  • an electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is arranged to run the computer program to perform the steps in any of the above method embodiments.
  • Embodiments of the present invention provide a method and a device for playing a video, which render and present two-dimensional (2D, 2-Dimensional) video through a three-dimensional (3D, 3-Dimensional) scene, thereby reducing the production cost of VR video.
  • FIG. 1 is a schematic flowchart of a method for playing a video according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a sky box model according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a play scene according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of acquiring a scene resource file according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a scene selection page according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of obtaining a three-dimensional scene after rendering according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of loading a player according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a video playing device according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of another video playing device according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a specific hardware implementation of a video playing device according to an embodiment of the present invention.
  • a method for playing a video is provided.
  • the method may be applied to a playback device having a VR video playback function, such as a terminal, a set top box, and the like.
  • the method may include:
  • S101 Obtain a corresponding scene resource file according to the to-be-played video.
  • the scene resource file includes: a projection picture combination corresponding to the virtual three-dimensional scene model, viewing attribute information, and screen position attribute information;
  • S102 Perform rendering in a three-dimensional space according to the preset rendering policy according to the scenario resource file, to obtain a rendered three-dimensional scene;
  • S103 Create a video playing screen in the rendered three-dimensional scene;
  • S104 Load the player to the video playing screen, and play the to-be-played video in the player after the loading is completed.
  • the technical solution shown in FIG. 1 renders and presents the playback of two-dimensional (2D, 2-Dimensional) video through the established three-dimensional (3D, 3-Dimensional) scene, which reduces the production cost of VR video.
  • the scenario resource file may be generated by a pre-production process.
  • the projection picture combination corresponding to the virtual three-dimensional scene model may specifically be the six pictures of a sky box mode. The production process may include: creating a virtual three-dimensional scene model, such as a theater or a living room, by using a modeling tool such as 3dmax or unity3D; and then acquiring the picture combination corresponding to the three-dimensional scene model by projecting the model with a preset projection algorithm.
  • the sky box method is used to convert the model into pictures: the 3D scene model is projected onto the six faces of a cube through the projection algorithm, and six pictures are generated in order.
  • during presentation, the six pictures can be seamlessly stitched to restore the sky box model.
  • the specific sky box model is shown in Figure 2.
  • in the process of making the six sky box pictures, it is also necessary to define the coordinate directions, the splicing order of the six pictures, the viewing attribute information such as the position and angle of view of the camera (i.e., the viewer), and the screen position attribute information such as the position of the playing screen, as shown in Figure 3.
  • the center of the space is taken as the origin, and the negative direction of the Z axis as the facing direction of the camera (observer); the screen, the camera position, the angle of view, and the like are defined accordingly, and the model is provided to the terminal by means of an attribute description file.
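As an illustration of how such a projection step can assign scene content to cube faces (a sketch under assumed conventions, not the patent's actual projection algorithm), the face a viewing direction lands on is the one whose axis has the largest absolute component; here the observer sits at the origin and faces the negative Z direction, as described above:

```python
def cube_face(x, y, z):
    """Return which sky box face a direction vector (x, y, z) projects onto.

    The face is the one whose axis has the largest absolute component.
    Assumes the observer sits at the origin, facing the -Z direction.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:
        return "top" if y > 0 else "bottom"
    return "back" if z > 0 else "front"  # camera faces -Z, so -Z is "front"
```

The face names here are illustrative; any consistent six-face labeling works as long as the splicing order recorded with the pictures matches it.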
  • the attribute description file can be specifically a scene.property file, which can include the following:
  • # Scene information, including scene identification, name, preview image, etc. #
  • # Define the position of the screen, including the upper left corner and the lower right corner #
  • the scene resource file package is generated by using the scene identifier, that is, 001001, as the directory name, together with the attribute description file.
  • it will be understood that, in the context of the present application, the content between the current symbol “#” and the next symbol “#” represents the description of and comments on the corresponding parameters of the attribute description file.
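For concreteness, a minimal scene.property might look like the sketch below; every key name and value here is an illustrative assumption, since the full file contents are not reproduced in this text:

```properties
# Scene information, including scene identification, name, preview image, etc. #
scene.id=001001
scene.name=theater
scene.preview=preview.jpg
# Viewing attribute information: camera (viewer) position and angle of view #
camera.position=0,0,0
camera.fov=90
# Define the position of the screen, including the upper left corner and the lower right corner #
screen.topLeft=-2.0,1.5,-5.0
screen.bottomRight=2.0,-1.0,-5.0
```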
  • for step S101, in an optional specific implementation manner, after the scene resource file is created, it may be provided in two acquisition modes: local preset and remote download.
  • local preset refers to making the scene resource file part of the content of the playback device installation package and saving it locally on the playback device, generally as a default configuration.
  • Remote download refers to configuring different scenes according to the video content in advance, and downloading scene images according to the configuration before playing.
  • taking the Android platform as an example, the scene resource can be saved to the assets directory; taking Apple's iOS platform as an example, the scene resource can be saved to the project directory. The scene resource is then installed on the playback device together with the installation package.
  • for remote download, the playback device can download the scene resource to the local device through the File Transfer Protocol (FTP) or the Hypertext Transfer Protocol (HTTP).
  • the corresponding scene resource file according to the to-be-played video described in step S101 may specifically include:
  • if a video to be played is configured with multiple scene resources, the user can select one of the scene resources for use; if a video to be played is not configured with any scene resource, the scene resource preset on the playback device is used directly.
  • S1012 Search for the local preset scene resource file according to the scene identifier; if found, start the local preset scene resource file; otherwise, go to S1013.
  • the playback device may first query whether the scene resource file exists in the local preset scene. If it exists, load the file directly. Otherwise, the scene resource file needs to be downloaded remotely according to the download address configured in the video metadata.
  • the playback device can display the scene selection page on the screen. As shown in FIG. 5, the preview screen of each scene is displayed. After receiving the selection instruction of the user for a certain scene resource, the corresponding scene is selected according to the selection instruction.
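The acquisition flow above (check the local preset first, otherwise fall back to remote download) can be sketched roughly as follows; the directory layout, function names, and download stub are assumptions for illustration, not the patent's implementation:

```python
import os


def download_scene(download_url, scene_id, local_dir):
    """Placeholder for S1013: fetch the scene resource package over
    FTP/HTTP to local_dir. Not implemented in this sketch."""
    raise NotImplementedError("remote download is outside this sketch")


def acquire_scene_resource(scene_id, local_dir, download_url=None):
    """Return the path of the scene resource file for scene_id.

    S1012: prefer a locally preset resource; otherwise fall back to the
    download address configured in the video metadata (S1013).
    """
    local_path = os.path.join(local_dir, scene_id, "scene.property")
    if os.path.exists(local_path):
        return local_path
    if download_url is None:
        raise FileNotFoundError(f"no scene resource configured for {scene_id}")
    return download_scene(download_url, scene_id, local_dir)
```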
  • for step S102, in an optional specific implementation manner, referring to FIG. 6, performing rendering in a three-dimensional space according to the preset rendering strategy to obtain the rendered three-dimensional scene includes:
  • S1021 splicing the picture combinations in the scene resource file to obtain a panoramic picture of the scene resource.
  • S1022 The panoramic picture of the scene resource is presented in a three-dimensional space according to a preset rendering function, to obtain a rendered three-dimensional scene;
  • S1023 Set a position and a viewing angle of the viewer according to the viewing attribute information in the rendered three-dimensional scene.
  • specifically, the attribute description file scene.property may be read, and the six pictures are sequentially spliced into a sky box according to the picture directions given in the attribute description file. Then, an openGL rendering function is called to render the sky box in three-dimensional space and to set the viewer's position and perspective.
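The reading-and-splicing step might be sketched like this; the `skybox.order` key is an assumed name for however the attribute file records the splicing order of the six pictures:

```python
def parse_properties(text):
    """Parse a minimal key=value attribute file, ignoring '#' comment lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props


def skybox_face_order(props):
    """Return the six face names in the splicing order given by the file."""
    faces = [f.strip() for f in props["skybox.order"].split(",")]
    if len(faces) != 6:
        raise ValueError("a sky box needs exactly six faces")
    return faces
```

With the ordered face list in hand, the renderer can bind each picture to its cube face before the sky box is drawn.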
  • creating a video play screen in the rendered three-dimensional scene may include:
  • a video play screen is created in the rendered three-dimensional scene according to the screen position attribute information.
  • the play screen surfaceview is created at the spatial location according to the screen position attribute information.
  • taking the Android platform as an example, the layout file content for creating the surfaceview object is as follows:
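The original layout content is not reproduced in this text; as a hedged sketch, a minimal layout declaring a SurfaceView inside an AbsoluteLayout (consistent with the AbsoluteLayout code fragment later in this section, with assumed attribute values) might look like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<AbsoluteLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- play screen placed at the position given by the screen
         position attribute information -->
    <SurfaceView
        android:id="@+id/surfaceView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</AbsoluteLayout>
```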
  • step S104 in an optional specific implementation manner, referring to FIG. 7 , loading the player to the video playing screen includes:
  • taking the Android platform as an example, a native player is created, and playback parameters of the player instance, such as size and shape (including cutting, stretching, etc.), are set; the player is finally loaded onto the screen surfaceview.
  • the code corresponding to these processes is, for example, as follows:
  • AbsoluteLayout.LayoutParams absParams = (AbsoluteLayout.LayoutParams) surfaceView.getLayoutParams();
  • the play link url can be set to start playing.
  • the video to be played can be displayed in the surfaceview area and integrated with the surrounding scene resource picture.
  • when playing the video to be played, the viewer's perspective can also be adjusted according to the playing state and the received control command.
  • when the playback device is a terminal device, the terminal may integrate a VR software development kit (SDK), such as Google's Cardboard or Facebook's Oculus; these VR SDKs support wide-angle mode playback and binocular mode playback.
  • wide-angle mode refers to the VR scene being presented as a single screen; the offset data collected by the gyroscope may be used, or the viewer's perspective in the virtual scene may be adjusted under the control of received gesture commands. Binocular mode refers to the VR scene being presented in the form of a dual screen.
  • images are acquired according to the difference between the two eyes' angles of view; after the terminal is inserted into a VR device such as a VR helmet, a stereo effect can be presented.
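As a rough numeric illustration of the binocular idea (an assumption for illustration, not the actual SDK computation), each eye's camera can be offset by half an interpupillary distance along the viewer's right axis before rendering its view:

```python
def eye_positions(center, right_axis, ipd=0.064):
    """Return (left_eye, right_eye) camera positions for binocular rendering.

    center: (x, y, z) viewer position; right_axis: unit vector pointing to
    the viewer's right; ipd: interpupillary distance in metres (assumed).
    """
    half = ipd / 2.0
    left = tuple(c - half * r for c, r in zip(center, right_axis))
    right = tuple(c + half * r for c, r in zip(center, right_axis))
    return left, right
```

Rendering the scene once from each of the two positions yields the dual-screen pair whose disparity produces the stereo effect.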
  • when the playback device is a set-top box, which cannot be used with a VR helmet, display in wide-angle mode is preferred; and since the set-top box does not support gestures or a gyroscope, the direction keys of the remote controller can be used to adjust the viewer's perspective in the virtual scene.
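For the set-top-box case, the mapping from direction keys to the viewer's angles could be sketched as follows; the key names, step size, and clamping limits are assumptions for illustration:

```python
def adjust_view(yaw, pitch, key, step=5.0):
    """Adjust viewer angles (degrees) from a remote-control direction key.

    Yaw wraps around 360 degrees; pitch is clamped so the viewer
    cannot flip over backwards.
    """
    if key == "LEFT":
        yaw -= step
    elif key == "RIGHT":
        yaw += step
    elif key == "UP":
        pitch += step
    elif key == "DOWN":
        pitch -= step
    yaw %= 360.0
    pitch = max(-90.0, min(90.0, pitch))
    return yaw, pitch
```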
  • This embodiment provides a method for playing video, which renders and presents 2D video playback through the established 3D scene, thereby reducing the production cost of VR video.
  • the device 80 may include: an acquisition module 801, a rendering module 802, a creation module 803, and a loading module 804; wherein
  • the acquiring module 801 is configured to acquire a corresponding scene resource file according to the to-be-played video, where the scene resource file includes: a projection image combination corresponding to the virtual three-dimensional scene model, viewing attribute information, and screen position attribute information;
  • the rendering module 802 is configured to perform rendering in a three-dimensional space according to the preset rendering policy according to the scene resource file, to obtain a rendered three-dimensional scene;
  • the creating module 803 is configured to create a video playing screen in the rendered three-dimensional scene
  • the loading module 804 is configured to load the player to the video playing screen and play the to-be-played video in the player after the loading is completed.
  • the rendering module 802 is specifically configured to:
  • the panoramic picture of the scene resource is presented in a three-dimensional space according to a preset rendering function, to obtain a rendered three-dimensional scene;
  • the position and the angle of view of the viewer are set according to the viewing attribute information in the rendered three-dimensional scene.
  • the creating module 803 is specifically configured to create a video playing screen in the rendered three-dimensional scene according to the screen position attribute information.
  • the loading module 804 is specifically configured to:
  • the device 80 further includes: an adjustment module 805, configured to adjust the viewer's perspective according to the playing state and the received control command when playing the to-be-played video.
  • the obtaining module 801 is specifically configured to:
  • the obtaining module 801 is specifically configured to:
  • the three-dimensional scene model is projected onto six faces of the cube, and six images are generated in order, so that six images can be seamlessly stitched and restored into a sky box model during rendering.
  • the obtaining module 801 is specifically configured to:
  • the scene resource file is downloaded to the local video playback device according to the scene identifier.
  • each functional module in this embodiment may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software function module.
  • the integrated unit may be stored in a computer readable storage medium if it is implemented in the form of a software function module and is not sold or used as a stand-alone product.
  • based on such understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium, including a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in this embodiment.
  • the foregoing storage medium includes media that can store program codes, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the computer program instructions corresponding to the method for playing video in this embodiment may be stored on a storage medium such as an optical disk, a hard disk, or a USB disk. When the computer program instructions corresponding to the method for playing video in the storage medium are read and executed by an electronic device, the following steps are included:
  • the scene resource file includes: a projection picture combination corresponding to the virtual three-dimensional scene model, viewing attribute information, and screen position attribute information;
  • the player is loaded to the video play screen, and the to-be-played video is played in the player after the loading is completed.
  • optionally, among the steps stored in the storage medium, performing rendering in the three-dimensional space according to the preset rendering policy according to the scene resource file, to obtain the rendered three-dimensional scene, specifically includes:
  • the panoramic picture of the scene resource is rendered in a three-dimensional space according to a preset rendering function, to obtain a rendered three-dimensional scene;
  • the position and angle of view of the viewer are set according to the viewing attribute information in the rendered three-dimensional scene.
  • optionally, among the steps stored in the storage medium, creating a video play screen in the rendered three-dimensional scene specifically includes:
  • a video play screen is created in the rendered three-dimensional scene according to the screen position attribute information.
  • optionally, among the steps stored in the storage medium, loading the player to the video playing screen specifically includes:
  • the player after the setting is completed is loaded to the video playing screen.
  • optionally, the steps stored in the storage medium further include:
  • the viewer's perspective is adjusted according to the playing state and the received control command.
  • optionally, among the steps stored in the storage medium, acquiring the scene resource file corresponding to the to-be-played video specifically includes:
  • optionally, among the steps stored in the storage medium, obtaining the image combination corresponding to the three-dimensional scene model by projecting the three-dimensional scene model using a preset projection algorithm includes:
  • the three-dimensional scene model is projected onto six faces of the cube, and six images are generated in order, so that six images can be seamlessly stitched and restored into a sky box model during rendering.
  • optionally, among the steps stored in the storage medium, acquiring the scene resource file corresponding to the to-be-played video specifically includes:
  • the scene resource file is downloaded to the local video playback device according to the scene identifier.
  • a specific hardware implementation of the video playback device 80 may include: a communication interface 1001, a memory 1002, a processor 1003, and a bus 1004; wherein,
  • the bus 1004 is configured to connect the communication interface 1001, the processor 1003, and the memory 1002, and to support mutual communication among these components;
  • the communication interface 1001 is configured to perform data transmission with an external network element
  • the memory 1002 is configured to store instructions and data
  • the processor 1003 executes the instruction to: obtain a corresponding scene resource file according to the to-be-played video; wherein the scene resource file includes: a projection image combination corresponding to the virtual three-dimensional scene model, viewing attribute information, and a screen position attribute information;
  • the player is loaded to the video play screen, and the to-be-played video is played in the player after the loading is completed.
  • the memory 1002 may be a volatile memory, such as a random access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory. It provides instructions and data to the processor 1003.
  • the processor 1003 may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor. It is to be understood that, for different devices, other electronic devices may be used to implement the above processor functions, which is not specifically limited in the embodiment of the present invention.
  • the processor 1003 is specifically configured to:
  • the panoramic picture of the scene resource is rendered in a three-dimensional space according to a preset rendering function, to obtain a rendered three-dimensional scene;
  • the position and angle of view of the viewer are set according to the viewing attribute information in the rendered three-dimensional scene.
  • the processor 1003 is specifically configured to:
  • a video play screen is created in the rendered three-dimensional scene according to the screen position attribute information.
  • the processor 1003 is specifically configured to:
  • the player after the setting is completed is loaded to the video playing screen.
  • the processor 1003 is further configured to:
  • the viewer's perspective is adjusted according to the playing state and the received control command.
  • the processor 1003 is specifically configured to:
  • the processor 1003 is specifically configured to:
  • the three-dimensional scene model is projected onto six faces of the cube, and six images are generated in order, so that six images can be seamlessly stitched and restored into a sky box model during rendering.
  • the processor 1003 is specifically configured to:
  • the communication interface is instructed to download the scene resource file to the local video playback device according to the scene identifier.
  • Embodiments of the present invention also provide an electronic device comprising a memory and a processor having a computer program stored therein, the processor being arranged to execute a computer program to perform the steps of any of the method embodiments described above.
  • the electronic device may further include a transmission device and an input and output device, wherein the transmission device is connected to the processor, and the input and output device is connected to the processor.
  • embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the method and device for playing video provided by the embodiments of the present invention have the following beneficial effect: a two-dimensional (2D) video is rendered and presented through an established three-dimensional (3D) scene, reducing the cost of producing VR video.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Embodiments of the present invention disclose a method and device for playing a video. The method may comprise: acquiring a corresponding scene resource file according to a video to be played; performing rendering in a three-dimensional space according to the scene resource file and a preset rendering policy, so as to obtain a rendered three-dimensional scene; creating a video play screen in the rendered three-dimensional scene; and loading a player onto the video play screen and playing the video to be played in the loaded player.

Description

一种播放视频的方法和设备 技术领域
本发明涉及虚拟现实(VR,全称为Virtual Reality)技术,尤其涉及一种播放视频的方法和设备。
背景技术
仿真技术和计算机图形学的发展,以及硬件设备的更新换代,为虚拟现实(VR,全称为Virtual Reality)技术的应用提供了坚实的技术基础。VR视频作为VR娱乐行业的第一梯队,引领VR技术的不断突破和应用。
VR视频与传统视频的区别在于:VR视频可以根据用户头部的移动来观看360度全景的视频内容;这就要求在制作VR视频的过程中,每一帧都需要全角度拍摄画面,最终合成360度的全景图片。因此,VR视频的拍摄难度大,制作成本高,费用昂贵,导致VR视频的数量很少,限制了VR视频的推广和普及。
发明内容
为解决上述技术问题,本发明实施例期望提供一种播放视频的方法和设备,能够降低VR视频的制作成本。
本发明的技术方案是这样实现的:
第一方面,本发明实施例提供了一种播放视频的方法,所述方法包括:
根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
在所述渲染后的三维场景中创建视频播放屏幕;
将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
在上述方案中,所述根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景,具体包括:
将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
在上述方案中,所述在所述渲染后的三维场景中创建视频播放屏幕,具体包括:
根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
在上述方案中,所述将播放器装载至所述视频播放屏幕,具体包括:
创建原生播放器;
设置所述播放器的播放参数;
将设置完成后的播放器装载至所述视频播放屏幕。
在上述方案中,所述方法还包括:
在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
在上述方案中,所述根据待播放视频获取对应的场景资源文件,具体包括:
通过建模工具创建虚拟的三维场景模型;
通过预设的投影算法，将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
在上述方案中,所述通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合,包括:
将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
在上述方案中,所述根据待播放视频获取对应的场景资源文件,具体包括:
解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
根据场景标识查找播放设备是否存在本地预设的场景资源文件;
若是,则启动本地预设的场景资源文件;
若否,则根据场景标识下载场景资源文件到所述视频播放设备本地。
第二方面,本发明实施例提供了一种视频播放设备,所述设备包括:获取模块、渲染模块、创建模块和装载模块;其中,
所述获取模块,设置为根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
所述渲染模块,设置为根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
所述创建模块,设置为在所述渲染后的三维场景中创建视频播放屏幕;
所述装载模块,设置为将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
在上述方案中,所述渲染模块,具体设置为:
将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
以及,根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
以及,在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
在上述方案中,所述创建模块,具体设置为根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
在上述方案中,所述装载模块,具体设置为:
创建原生播放器;
以及,设置所述播放器的播放参数;
以及,将设置完成后的播放器装载至所述视频播放屏幕。
在上述方案中,所述设备还包括:调整模块,设置为在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
在上述方案中,所述获取模块,具体设置为:
通过建模工具创建虚拟的三维场景模型;
通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
在上述方案中,所述获取模块,设置为:
将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
在上述方案中,所述获取模块,具体设置为:
解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
根据场景标识查找播放设备是否存在本地预设的场景资源文件;若是,则启动本地预设的场景资源文件;
若否,则根据场景标识下载场景资源文件到所述视频播放设备本地。
第三方面，本发明实施例提供了一种视频播放设备，所述设备包括：通信接口、存储器、处理器和总线；其中，
所述总线设置为连接所述通信接口、所述处理器和所述存储器以及这些器件之间的相互通信;
所述通信接口,设置为与外部网元进行数据传输;
所述存储器,设置为存储指令和数据;
所述处理器执行所述指令设置为:根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
在所述渲染后的三维场景中创建视频播放屏幕;
将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
在上述方案中,所述处理器,具体设置为:
将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
在上述方案中,所述处理器,具体设置为:
根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
在上述方案中,所述处理器,具体设置为:
创建原生播放器;
设置所述播放器的播放参数;
将设置完成后的播放器装载至所述视频播放屏幕。
在上述方案中,所述处理器,还设置为:
在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
在上述方案中,所述处理器,具体设置为:
通过建模工具创建虚拟的三维场景模型;
通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
在上述方案中,所述处理器,具体设置为:将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
在上述方案中,所述处理器,具体设置为:
解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
根据场景标识查找播放设备是否存在本地预设的场景资源文件;
若是,则启动本地预设的场景资源文件;
若否,则根据场景标识指示所述通信接口下载场景资源文件到所述视频播放设备本地。
根据本发明的又一个实施例,还提供了一种存储介质,所述存储介质中存储有计算机程序,其中,所述计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
根据本发明的又一个实施例，还提供了一种电子装置，包括存储器和处理器，所述存储器中存储有计算机程序，所述处理器被设置为运行所述计算机程序以执行上述任一项方法实施例中的步骤。
本发明实施例提供了一种播放视频的方法和设备,通过建立的三维(3D,3-Dimensional)场景对二维(2D,2-Dimensional)视频的播放进行渲染和呈现,降低了VR视频的制作成本。
附图说明
图1为本发明实施例提供的一种播放视频的方法流程示意图;
图2为本发明实施例提供的一种天空盒模型示意图;
图3为本发明实施例提供的一种播放场景构造示意图;
图4为本发明实施例提供的一种获取场景资源文件的流程示意图;
图5为本发明实施例提供的一种场景选择页面示意图;
图6为本发明实施例提供的一种获得渲染后的三维场景的流程示意图;
图7为本发明实施例提供的一种装载播放器的流程示意图;
图8为本发明实施例提供的一种视频播放设备结构示意图;
图9为本发明实施例提供的另一种视频播放设备结构示意图;
图10为本发明实施例提供的一种视频播放设备的具体硬件实现示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述。
实施例一
参见图1,其示出了本发明实施例提供的一种播放视频的方法,该方法可以应用于具有VR视频播放功能的播放设备,例如终端、机顶盒等设备,该方法可以包括:
S101:根据待播放视频获取对应的场景资源文件;
其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
S102:根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
S103:在所述渲染后的三维场景中创建视频播放屏幕;
S104:将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
图1所示的技术方案通过建立的三维(3D,3-Dimensional)场景对二维(2D,2-Dimensional)视频的播放进行渲染和呈现,降低了VR视频的制作成本。
针对图1所示的技术方案,对于步骤S101,在一种可选的具体实现方式中,所述场景资源文件可以通过预先的制作过程来生成得到。详细来说,虚拟的三维场景模型对应的投影图片组合具体可以是天空盒方式的6个图片,其制作过程可以包括:通过3dmax,unity3D等建模工具创建虚拟的三维场景模型,包括影院,客厅等;随后通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。但是由于3D模型的数据较大,不适合网络下载和传输,因此优选采用天空盒的方式,将模型转化成图片,通过投影算法,将三维场景模型投影到立方体6个面,并按序生成6个图片,从而使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型,具体的天空盒模型如图2所示。在制作天空盒的6个图片的过程中,还需要定义坐标的方向,6个图片的拼接顺序,摄像头(即观看者)的位置和视角等观看属性信息,以及播放屏幕的位置等屏幕位置属性信息,如图3所示。
例如,以空间的中心位置为原点,以Z轴的负方向为摄像头(观察者)的面对方向,定义屏幕,摄像头位置,视角等,并采用属性描述文件的模式提供给终端。属性描述文件具体可以为scene.property文件,可以包含以下内容:
#基本信息,包括场景标识、名称、预览图片等
id=001001
name=xx
preview=xxx
#以下定义6个图片位置
front=1.jpg;back=2.jpg;top=3.jpg;bottom=4.jpg;left=5.jpg;right=6.jpg
#以下定义摄像头(观看者)初始位置和视角范围
camera=x,y,z;
angle=60
#以下定义屏幕的位置,包括左上角,右下角
screenlefttop=x,y,z
screenrightbottom=x,y,z
将图片文件和上述的属性描述文件以场景id即001001为目录,生成场景资源文件包。值得注意的是,在上述属性描述文件的内容中,符号“#”表示当前符号“#”与下一个符号“#”之间的属性描述文件内容及参量所对应的说明和注释。可以理解地,在本申请的后续上下文中,符号“#”表示当前符号“#”与下一个符号“#”之间的内容均表示对应的参量说明和注释。
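作为示意，播放设备读取scene.property文件的过程可以用如下简化的Java示例说明（假设性实现，并非本专利规定的解析方式；按"key=value"逐行解析，跳过以#开头的注释行）：

```java
import java.util.HashMap;
import java.util.Map;

// scene.property属性描述文件的简化解析示例（假设性代码，仅用于说明原理）
public class ScenePropertyParser {
    // 将"key=value"格式的文本解析为键值对，跳过#注释行与空行
    public static Map<String, String> parse(String text) {
        Map<String, String> props = new HashMap<>();
        for (String line : text.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) {
                continue; // 跳过注释与空行
            }
            int eq = line.indexOf('=');
            if (eq > 0) {
                props.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
            }
        }
        return props;
    }
}
```

解析得到的键值对（如camera、angle、screenlefttop等）随后可用于设置观看者位置、视角和播放屏幕位置。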
需要说明的是,上述具体实现方式可以在视频播放设备中进行实现,还能够在视频播放设备以外的其他设备或服务器中实现。
当上述具体实现方式在视频播放设备以外的其他设备或服务器中实现时,还需要说明的是:对于步骤S101,在一种可选的具体实现方式中,当场景资源文件制作完成后,可以提供两种获取方式:本地预设和远程下载两种获取方式。简单来说,本地预设是指将场景资源文件制作成播放设备安装包内容的一部分,保存在播放设备本地,一般作为默认配置。远程下载是指预先根据视频内容,配置不同的场景,播放前根据配置下载场景 图片。
当场景资源保存在播放设备本地时,以安卓android平台为例,可以将场景资源保存到assert目录下;以苹果公司开发的移动操作系统ios平台为例,场景资源可以保存到工程目录下,随安装包一起安装在播放设备中。
而由于安装包的大小限制,无法将所有的场景资源预设到安装包中,而且对于本地预设的场景资源,一旦用户安装版本后,就无法增量更新场景,因此,还可以提供远程下载的获取方式,具体地可以将场景资源进行网络发布后,播放设备可以通过文件传输协议(FTP,全称为File Transfer Protocol),超文本传输协议(HTTP,全称为HyperText Transfer Protocol)等协议下载场景资源到播放设备本地。
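按照上文"以场景id为目录生成场景资源文件包"的约定，远程下载时可根据场景标识拼接下载地址。下面是一个假设性的Java示例（基地址与打包文件名scene.zip仅为示例约定，并非本专利规定的格式）：

```java
// 根据场景标识拼接场景资源文件的下载地址（假设性示例）
public class SceneUrlBuilder {
    // 约定：场景资源以场景id为目录打包发布，如 http://host/scenes/001001/scene.zip
    public static String buildSceneUrl(String baseUrl, String sceneId) {
        if (!baseUrl.endsWith("/")) {
            baseUrl = baseUrl + "/";
        }
        return baseUrl + sceneId + "/scene.zip";
    }
}
```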
因此,参见图4,步骤S101所述的根据待播放视频获取对应的场景资源文件,具体可以包括:
S1011:在播放待播放视频前,解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
可以理解地,如果一个待播放视频配置了多个场景资源,用户可以选择其中一个场景资源进行使用;如果一个待播放视频没有配置场景资源,则直接启动播放设备本地预设的场景资源。
S1012:根据场景标识查找播放设备是否存在本地预设的场景资源文件;如果有,则启动本地预设的场景资源文件;否则转至S1013。
S1013:根据场景标识下载场景资源文件到播放设备本地。
详细来说,播放设备在播放视频时,首先可以解析视频元数据中的场景配置。例如:scene=001001,这表示该视频中包含一个场景。
播放设备可以首先查询本地预设的场景中是否存在该场景资源文件,如果存在则直接加载,否则需要根据视频元数据中所配置的下载地址远程下载该场景资源文件。
此外,一个视频节目也可以配置多个场景,例如:scene=001001,001002,001003,001004;分别表示影院场景、客厅场景、太空场景1和太空场景2。此时播放设备可以在屏幕显示场景选择页面,如图5所示,展示每个场景的预览画面,当接收到用户针对某个场景资源的选择指令后,根据选择指令选择对应的场景。
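上述视频元数据中形如scene=001001,001002的场景配置，可以按如下方式解析为场景标识列表（简化的假设性Java示例）：

```java
import java.util.ArrayList;
import java.util.List;

// 解析视频元数据中的场景配置，如 "scene=001001,001002,001003"（假设性示例）
public class SceneConfigParser {
    public static List<String> parseSceneIds(String metadataLine) {
        List<String> ids = new ArrayList<>();
        if (metadataLine == null || !metadataLine.startsWith("scene=")) {
            return ids; // 未配置场景时返回空列表，播放设备可启动本地预设场景
        }
        for (String id : metadataLine.substring("scene=".length()).split(",")) {
            id = id.trim();
            if (!id.isEmpty()) {
                ids.add(id);
            }
        }
        return ids;
    }
}
```

解析结果只有一个标识时可直接加载；有多个标识时即可据此展示场景选择页面。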
针对图1所示的技术方案,对于步骤S102,在一种可选的具体实现方式中,参见图6,根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景,具体包括:
S1021:将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
S1022:根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
S1023:在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
可选地，根据前述举例，当播放设备获取到场景资源文件后，可以读取属性描述文件scene.property，并根据属性描述文件中的内容将6个图片按照天空盒图片的方向顺序拼接成天空盒，随后调用OpenGL渲染功能，将天空盒在三维空间中呈现出来，并设置观看者的位置和视角。
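渲染天空盒时需要根据观察方向判断其落在立方体的哪个面上，这是将6个图片无缝拼接还原为天空盒的基本查找逻辑。下面用一个简化的Java示例说明（假设性代码，坐标约定沿用上文"Z轴负方向为观察者面对方向"的定义，并非本专利的具体实现）：

```java
// 天空盒方向-面查找的简化示例（假设性代码，仅用于说明原理）
public class SkyboxFace {
    // 根据方向向量(x, y, z)返回其命中的立方体面名称
    public static String faceFor(double x, double y, double z) {
        double ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
        if (ax >= ay && ax >= az) {          // x分量最大：命中左面或右面
            return x > 0 ? "right" : "left";
        } else if (ay >= az) {               // y分量最大：命中顶面或底面
            return y > 0 ? "top" : "bottom";
        } else {                             // z分量最大：命中前面或后面
            return z < 0 ? "front" : "back"; // 约定Z轴负方向为观察者面对方向
        }
    }
}
```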
针对图1所示的技术方案,对于步骤S103,在一种可选的具体实现方式中,在所述渲染后的三维场景中创建视频播放屏幕,具体可以包括:
根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
可选地,根据前述举例,根据屏幕位置属性信息,在空间位置创建播放屏幕surfaceview。例如,在布局文件中,以android平台为例,创建surfaceview对象的布局文件内容如下所示:
<SurfaceView android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="@+id/surfaceview"
/>
针对图1所示的技术方案,对于步骤S104,在一种可选的具体实现方式中,参见图7,将播放器装载至所述视频播放屏幕,具体包括:
S1041:创建原生播放器;
S1042:设置所述播放器的播放参数;
S1043:将设置完成后的播放器装载至视频播放屏幕。
具体地，根据前述举例，以android平台为例，创建原生播放器，并设置播放器实例的播放参数，例如大小和形状，包括切割、拉伸等；最终装载到屏幕surfaceview上。这些过程对应的示例代码如下所示：
// 获取SurfaceView对象
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surfaceview);
SurfaceHolder surfaceHolder = surfaceView.getHolder();
// 创建播放器
MediaPlayer player = new MediaPlayer();
// 装载播放器
player.setDisplay(surfaceHolder);
// 获取布局参数以设置播放器大小和位置
AbsoluteLayout.LayoutParams absParams = (AbsoluteLayout.LayoutParams) surfaceView.getLayoutParams();
// 左上角x轴位置
absParams.x = x;
// 左上角y轴位置
absParams.y = y;
// 宽度
absParams.width = width;
// 高度
absParams.height = height;
// 播放器装载布局
surfaceView.setLayoutParams(absParams);
可以理解地,通过上述过程创建并装载播放器后,可以设置播放链接url,启动播放。此时待播放视频就能够在该surfaceview区域展示,并与周围的场景资源图片融为一体。
针对图1所示的技术方案,在播放待播放视频时,还可以按照播放状态以及接收到的控制指令调整所述观看者的视角。
可选地,当播放设备为终端设备时,终端中可以集成VR软件开发工具包(SDK,全称为Software Development Kit),如谷歌google的cardboard,脸书facebook的Oculus,这些VR SDK均支持广角模式播放以及双目模式播放。
详细来说，广角模式指的是VR场景以单屏的形式呈现，可以调用陀螺仪采集的偏移数据，或者通过接收到的手势指令控制来调整虚拟场景中观看者的视角；而双目模式指的是VR场景以双屏的形式呈现，根据双目视角差别，采集图像；将终端插入VR头盔等VR设备后，能够呈现立体效果。
当播放设备为机顶盒时,由于机顶盒无法配合头盔观看,因此,优选采用广角模式展示;而且由于机顶盒不支持手势和陀螺仪功能,因此,可以采用遥控器的方向键调整虚拟场景中观看者的视角。
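按陀螺仪偏移数据（或遥控器方向键）调整观看者视角的逻辑，可以用如下简化的Java示例说明（角度累加与限幅方式为假设，具体以所用VR SDK为准）：

```java
// 广角模式下观看者视角调整的简化示例（假设性代码）
public class ViewAngleController {
    private double yaw;   // 水平视角，单位：度
    private double pitch; // 垂直视角，单位：度

    // 按陀螺仪（或遥控器方向键）给出的偏移量更新视角
    public void applyOffset(double deltaYaw, double deltaPitch) {
        yaw = (yaw + deltaYaw) % 360.0;                 // 水平方向可360度环绕
        pitch += deltaPitch;
        pitch = Math.max(-90.0, Math.min(90.0, pitch)); // 垂直方向限制在正负90度
    }

    public double getYaw() { return yaw; }
    public double getPitch() { return pitch; }
}
```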
本实施例提供了一种播放视频的方法,通过建立的3D场景对2D视频的播放进行渲染和呈现,降低了VR视频的制作成本。
实施例二
基于前述实施例相同的技术构思,参见图8,其示出了本发明实施例提供的一种视频播放设备80,所述设备80可以包括:获取模块801、渲染模块802、创建模块803和装载模块804;其中,
所述获取模块801,设置为根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
所述渲染模块802,设置为根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
所述创建模块803,设置为在所述渲染后的三维场景中创建视频播放屏幕;
所述装载模块804,设置为将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
在上述方案中,所述渲染模块802,具体设置为:
将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
以及,根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
以及,在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
在上述方案中,所述创建模块803,具体设置为根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
在上述方案中,所述装载模块804,具体设置为:
创建原生播放器;
以及,设置所述播放器的播放参数;
以及,将设置完成后的播放器装载至所述视频播放屏幕。
在上述方案中,参见图9,所述设备80还包括:调整模块805,设置为在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
在上述方案中,所述获取模块801,具体设置为:
通过建模工具创建虚拟的三维场景模型;
通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
在上述方案中,所述获取模块801,具体设置为:
将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
在上述方案中,所述获取模块801,具体设置为:
解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
根据场景标识查找播放设备是否存在本地预设的场景资源文件;若是,则启动本地预设的场景资源文件;
若否,则根据场景标识下载场景资源文件到所述视频播放设备本地。
另外,在本实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机、服务器或者网络设备等）或处理器（processor）执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（ROM，Read Only Memory）、随机存取存储器（RAM，Random Access Memory）、磁碟或者光盘等各种可以存储程序代码的介质。
具体来讲,本实施例中的一种播放视频的方法对应的计算机程序指令可以被存储在光盘,硬盘,U盘等存储介质上,当存储介质中的与一种播放视频的方法对应的计算机程序指令被一电子设备读取或被执行时,包括如下步骤:
根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
在所述渲染后的三维场景中创建视频播放屏幕;
将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
可选的,存储介质中存储的与步骤:所述根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景,具体包括:
将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
可选的，存储介质中存储的与步骤：所述在所述渲染后的三维场景中创建视频播放屏幕，具体包括：
根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
可选的,存储介质中存储的与步骤:所述将播放器装载至所述视频播放屏幕,具体包括:
创建原生播放器;
设置所述播放器的播放参数;
将设置完成后的播放器装载至所述视频播放屏幕。
可选的,存储介质中存储的与步骤:所述方法还包括:
在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
可选的,存储介质中存储的与步骤:所述根据待播放视频获取对应的场景资源文件,具体包括:
通过建模工具创建虚拟的三维场景模型;
通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
可选的,存储介质中存储的与步骤:所述通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合,包括:
将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
可选的,存储介质中存储的与步骤:所述根据待播放视频获取对应的场景资源文件,具体包括:
解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
根据场景标识查找播放设备是否存在本地预设的场景资源文件;
若是,则启动本地预设的场景资源文件;
若否,则根据场景标识下载场景资源文件到所述视频播放设备本地。
实施例三
基于前述实施例相同的技术构思，参见图10，其为本发明实施例提供的一种视频播放设备80的具体硬件实现示意图，可以包括：通信接口1001、存储器1002、处理器1003和总线1004；其中，
所述总线1004设置为连接所述通信接口1001、所述处理器1003和所述存储器1002以及这些器件之间的相互通信;
所述通信接口1001,设置为与外部网元进行数据传输;
所述存储器1002,设置为存储指令和数据;
所述处理器1003执行所述指令设置为:根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
在所述渲染后的三维场景中创建视频播放屏幕;
将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
在实际应用中,上述存储器1002可以是易失性存储器(volatile memory),例如随机存取存储器(RAM,Random-Access Memory);或者非易失性存储器(non-volatile memory),例如只读存储器(ROM,Read-Only Memory),快闪存储器(flash memory),硬盘(HDD,Hard Disk Drive)或固态硬盘(SSD,Solid-State Drive);或者上述种类的存储器的组合,并向处理器1003提供指令和数据。
上述处理器1003可以为特定用途集成电路（ASIC，Application Specific Integrated Circuit）、数字信号处理器（DSP，Digital Signal Processor）、数字信号处理装置（DSPD，Digital Signal Processing Device）、可编程逻辑装置（PLD，Programmable Logic Device）、现场可编程门阵列（FPGA，Field Programmable Gate Array）、中央处理器（CPU，Central Processing Unit）、控制器、微控制器、微处理器中的至少一种。可以理解地，对于不同的设备，用于实现上述处理器功能的电子器件还可以为其它，本发明实施例不作具体限定。
示例性地,所述处理器1003,具体设置为:
将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
示例性地,所述处理器1003,具体设置为:
根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
示例性地,所述处理器1003,具体设置为:
创建原生播放器;
设置所述播放器的播放参数;
将设置完成后的播放器装载至所述视频播放屏幕。
示例性地,所述处理器1003,还设置为:
在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
示例性地,所述处理器1003,具体设置为:
通过建模工具创建虚拟的三维场景模型;
通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
示例性地,所述处理器1003,具体设置为:
将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
示例性地,所述处理器1003,具体设置为:
解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
根据场景标识查找播放设备是否存在本地预设的场景资源文件;
若是,则启动本地预设的场景资源文件;
若否,则根据场景标识指示所述通信接口下载场景资源文件到所述视频播放设备本地。
本发明的实施例还提供了一种电子装置,包括存储器和处理器,该存储器中存储有计算机程序,该处理器被设置为运行计算机程序以执行上述任一项方法实施例中的步骤。
可选地,上述电子装置还可以包括传输设备以及输入输出设备,其中,该传输设备和上述处理器连接,该输入输出设备和上述处理器连接。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备（系统）和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器，使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述,仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。
工业实用性
如上所述,本发明实施例提供的一种播放视频的方法和设备具有以下有益效果:通过建立的三维(3D,3-Dimensional)场景对二维(2D,2-Dimensional)视频的播放进行渲染和呈现,降低了VR视频的制作成本。

Claims (26)

  1. 一种播放视频的方法,所述方法包括:
    根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
    根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
    在所述渲染后的三维场景中创建视频播放屏幕;
    将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
  2. 根据权利要求1所述的方法,其中,所述根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景,具体包括:
    将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
    根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
    在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
  3. 根据权利要求1所述的方法,其中,所述在所述渲染后的三维场景中创建视频播放屏幕,具体包括:
    根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
  4. 根据权利要求1所述的方法,其中,所述将播放器装载至所述视频播放屏幕,具体包括:
    创建原生播放器;
    设置所述播放器的播放参数;
    将设置完成后的播放器装载至所述视频播放屏幕。
  5. 根据权利要求1所述的方法,其中,所述方法还包括:
    在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
  6. 根据权利要求1所述的方法,其中,所述根据待播放视频获取对应的场景资源文件,具体包括:
    通过建模工具创建虚拟的三维场景模型;
    通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
  7. 根据权利要求6所述的方法,其中,所述通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合,包括:
    将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
  8. 根据权利要求1所述的方法,其中,所述根据待播放视频获取对应的场景资源文件,具体包括:
    解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
    根据场景标识查找播放设备是否存在本地预设的场景资源文件;
    若是,则启动本地预设的场景资源文件;
    若否,则根据场景标识下载场景资源文件到所述视频播放设备本地。
  9. 一种视频播放设备,所述设备包括:获取模块、渲染模块、创建模块和装载模块;其中,
    所述获取模块,设置为根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
    所述渲染模块,设置为根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
    所述创建模块,设置为在所述渲染后的三维场景中创建视频播放屏幕;
    所述装载模块,设置为将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
  10. 根据权利要求9所述的设备,其中,所述渲染模块,具体设置为:
    将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
    以及,根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
    以及，在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
  11. 根据权利要求9所述的设备,其中,所述创建模块,具体设置为根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
  12. 根据权利要求9所述的设备,其中,所述装载模块,具体设置为:
    创建原生播放器;
    以及,设置所述播放器的播放参数;
    以及,将设置完成后的播放器装载至所述视频播放屏幕。
  13. 根据权利要求9所述的设备,其中,所述设备还包括:调整模块,设置为在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
  14. 根据权利要求9所述的设备,其中,所述获取模块,具体设置为:
    通过建模工具创建虚拟的三维场景模型;
    通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
  15. 根据权利要求14所述的设备,其中,所述获取模块,设置为:
    将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
  16. 根据权利要求9所述的设备，其中，所述获取模块，具体设置为：
    解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
    根据场景标识查找播放设备是否存在本地预设的场景资源文件;若是,则启动本地预设的场景资源文件;
    若否,则根据场景标识下载场景资源文件到所述视频播放设备本地。
  17. 一种视频播放设备，所述设备包括：通信接口、存储器、处理器和总线；其中，
    所述总线设置为连接所述通信接口、所述处理器和所述存储器以及这些器件之间的相互通信;
    所述通信接口,设置为与外部网元进行数据传输;
    所述存储器,设置为存储指令和数据;
    所述处理器执行所述指令设置为:根据待播放视频获取对应的场景资源文件;其中,所述场景资源文件包括:虚拟的三维场景模型对应的投影图片组合,观看属性信息以及屏幕位置属性信息;
    根据所述场景资源文件按照预设的渲染策略在三维空间中进行渲染,获得渲染后的三维场景;
    在所述渲染后的三维场景中创建视频播放屏幕;
    将播放器装载至所述视频播放屏幕,并在装载完成后的播放器中播放所述待播放视频。
  18. 根据权利要求17所述的设备，其中，所述处理器，具体设置为：
    将场景资源文件中的所述图片组合进行拼接,得到所述场景资源的全景图片;
    根据预设的渲染功能将所述场景资源的全景图片在三维空间中进行呈现,得到渲染后的三维场景;
    在所述渲染后的三维场景中根据所述观看属性信息设置观看者的位置和视角。
  19. 根据权利要求17所述的设备,其中,所述处理器,具体设置为:
    根据所述屏幕位置属性信息在所述渲染后的三维场景中创建视频播放屏幕。
  20. 根据权利要求17所述的设备,其中,所述处理器,具体设置为:
    创建原生播放器;
    设置所述播放器的播放参数;
    将设置完成后的播放器装载至所述视频播放屏幕。
  21. 根据权利要求17所述的设备,其中,所述处理器,还设置为:
    在播放所述待播放视频时,按照播放状态以及接收到的控制指令调整所述观看者的视角。
  22. 根据权利要求17所述的设备,其中,所述处理器,具体设置为:
    通过建模工具创建虚拟的三维场景模型;
    通过预设的投影算法,将所述三维场景模型通过投影后获取所述三维场景模型对应的图片组合。
  23. 根据权利要求22所述的设备,其中,所述处理器,具体设置为:将所述三维场景模型投影到立方体6个面,并按序生成6个图片,使得在呈现时能够将6个图片再无缝拼接还原成天空盒模型。
  24. 根据权利要求17所述的设备,其中,所述处理器,具体设置为:
    解析待播放视频与场景资源之间的对应关系,获取待播放视频对应的场景标识;
    根据场景标识查找播放设备是否存在本地预设的场景资源文件;
    若是,则启动本地预设的场景资源文件;
    若否,则根据场景标识指示所述通信接口下载场景资源文件到所述视频播放设备本地。
  25. 一种存储介质,所述存储介质中存储有计算机程序,其中,所述计算机程序被设置为运行时执行所述权利要求1至8任一项中所述的方法。
  26. 一种电子装置,包括存储器和处理器,所述存储器中存储有计算机程序,所述处理器被设置为运行所述计算机程序以执行所述权利要求1至8任一项中所述的方法。
PCT/CN2018/079843 2017-04-27 2018-03-21 一种播放视频的方法和设备 WO2018196519A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710289347.6 2017-04-27
CN201710289347.6A CN107071557A (zh) 2017-04-27 2017-04-27 一种播放视频的方法和设备

Publications (1)

Publication Number Publication Date
WO2018196519A1 true WO2018196519A1 (zh) 2018-11-01

Family

ID=59605085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079843 WO2018196519A1 (zh) 2017-04-27 2018-03-21 一种播放视频的方法和设备

Country Status (2)

Country Link
CN (1) CN107071557A (zh)
WO (1) WO2018196519A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071557A (zh) * 2017-04-27 2017-08-18 中兴通讯股份有限公司 一种播放视频的方法和设备
CN107612912B (zh) * 2017-09-20 2022-02-25 北京京东尚科信息技术有限公司 一种设置播放参数的方法和装置
CN107728997B (zh) * 2017-10-31 2020-09-11 万兴科技股份有限公司 一种视频播放器渲染系统
CN108052206B (zh) * 2018-01-05 2021-08-13 重庆爱奇艺智能科技有限公司 一种视频播放方法、装置及电子设备
US11030796B2 (en) * 2018-10-17 2021-06-08 Adobe Inc. Interfaces and techniques to retarget 2D screencast videos into 3D tutorials in virtual reality
CN111696396A (zh) * 2019-03-12 2020-09-22 兴万信息技术(上海)有限公司 一种vr模拟培训教学系统
CN111935534B (zh) * 2020-07-30 2022-09-27 视伴科技(北京)有限公司 一种回放录制视频的方法及装置
CN112235585B (zh) * 2020-08-31 2022-11-04 江苏视博云信息技术有限公司 一种虚拟场景的直播方法、装置及系统
CN114286142B (zh) * 2021-01-18 2023-03-28 海信视像科技股份有限公司 一种虚拟现实设备及vr场景截屏方法
WO2022151883A1 (zh) * 2021-01-18 2022-07-21 海信视像科技股份有限公司 虚拟现实设备
CN113207037B (zh) * 2021-03-31 2023-04-07 影石创新科技股份有限公司 全景视频动画的模板剪辑方法、装置、终端、系统及介质
CN113206992A (zh) * 2021-04-20 2021-08-03 聚好看科技股份有限公司 一种转换全景视频投影格式的方法及显示设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160203646A1 (en) * 2015-01-14 2016-07-14 Hashplay Inc. System and method for providing virtual reality content
CN106201396A (zh) * 2016-06-29 2016-12-07 乐视控股(北京)有限公司 一种数据展示方法及装置、虚拟现实设备与播放控制器
CN106408631A (zh) * 2016-09-30 2017-02-15 厦门亿力吉奥信息科技有限公司 三维宏观展示方法及系统
CN107071557A (zh) * 2017-04-27 2017-08-18 中兴通讯股份有限公司 一种播放视频的方法和设备

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101741B (zh) * 2016-07-26 2020-12-15 武汉斗鱼网络科技有限公司 在网络视频直播平台上观看全景视频的方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160203646A1 (en) * 2015-01-14 2016-07-14 Hashplay Inc. System and method for providing virtual reality content
CN106201396A (zh) * 2016-06-29 2016-12-07 乐视控股(北京)有限公司 一种数据展示方法及装置、虚拟现实设备与播放控制器
CN106408631A (zh) * 2016-09-30 2017-02-15 厦门亿力吉奥信息科技有限公司 三维宏观展示方法及系统
CN107071557A (zh) * 2017-04-27 2017-08-18 中兴通讯股份有限公司 一种播放视频的方法和设备

Also Published As

Publication number Publication date
CN107071557A (zh) 2017-08-18

Similar Documents

Publication Publication Date Title
WO2018196519A1 (zh) 一种播放视频的方法和设备
US11488355B2 (en) Virtual world generation engine
CN107590771B (zh) 具有用于在建模3d空间中投影观看的选项的2d视频
WO2017088491A1 (zh) 一种视频的播放方法和装置
KR102221937B1 (ko) 임의의 뷰 생성
US20130265306A1 (en) Real-Time 2D/3D Object Image Composition System and Method
TW201921919A (zh) 影像處理裝置及檔案生成裝置
KR20150129260A (ko) 오브젝트 가상현실 콘텐츠 서비스 시스템 및 방법
US20180053531A1 (en) Real time video performance instrument
US10163250B2 (en) Arbitrary view generation
US20160350955A1 (en) Image processing method and device
CN109636917A (zh) 三维模型的生成方法、装置、硬件装置
JP2012141753A5 (zh)
KR101934799B1 (ko) 파노라마 영상을 이용하여 새로운 컨텐츠를 생성하는 방법 및 시스템
KR20170139202A (ko) 파노라마 영상을 이용하여 새로운 컨텐츠를 생성하는 방법 및 시스템
TWI817273B (zh) 即時多視像視訊轉換方法和系統
CN110914871A (zh) 获取三维场景的方法与装置
US9715900B2 (en) Methods, circuits, devices, systems and associated computer executable code for composing composite content
US20230072261A1 (en) Computer system for rendering event-customized audio content, and method thereof
KR102533209B1 (ko) 다이나믹 확장현실(xr) 콘텐츠 생성 방법 및 시스템
KR101572347B1 (ko) 라이브 홀로 박스를 위한 서비스 장치 및 서비스 방법
US11972522B2 (en) Arbitrary view generation
KR102412595B1 (ko) 3d 캐릭터를 활용한 특수촬영물 제작 서비스 제공 방법 및 장치
TWI828575B (zh) 環景佈景生成系統及其控制方法
JP7378960B2 (ja) 画像処理装置、画像処理システム、画像生成方法、および、プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18790210

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18790210

Country of ref document: EP

Kind code of ref document: A1