WO2013034030A1 - Method and device for playing video from a breakpoint - Google Patents

Method and device for playing video from a breakpoint

Info

Publication number
WO2013034030A1
Authority
WO
WIPO (PCT)
Prior art keywords
view
video
virtual
angle
information
Prior art date
Application number
PCT/CN2012/078605
Other languages
English (en)
French (fr)
Inventor
石腾
张园园
惠宇
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2013034030A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106 Content storage operation involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2747 Remote storage of video programs received via the downstream path, e.g. from the server
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4333 Processing operations in response to a pause request

Definitions

  • Embodiments of the present invention relate to the field of video technology and, more particularly, to a method and apparatus for playing a video from a breakpoint.
  • Background Art
  • Conventional two-dimensional video is single-view video: content is captured by a single camera and the generated video stream is encoded for storage or transmission.
  • 3D video is developing from multi-view video toward free-viewpoint video.
  • A typical multi-view video system includes the following components.
  • The MVC (Multiview Video Coding) bitstream forms an operation point (OP) from a sub-stream consisting of the NAL (Network Abstraction Layer) units of a certain view and the views it depends on.
  • Each OP can be decoded independently.
  • Resuming playback from a breakpoint is an important part of the user experience in video systems.
  • The user can save a bookmark at any time while watching a program, enabling "cross-screen resume" of the program content.
  • The bookmark can be stored in a web server, and the information recorded in it ensures that the user can accurately locate the breakpoint when next watching the program on the same device or another device, saving the trouble of dragging the progress bar to find the viewing point.
  • a user can select one or a set of optimal viewing angles when viewing a video.
  • However, when resuming, the terminal can only request access to all of the multi-view video content, or to the base-view video content, starting from the viewing point; the user then has to switch viewing angles again to find the best one, which degrades the user experience and increases the complexity of the operation.
  • the embodiment of the invention provides a method and a device for playing video from a breakpoint, which can improve the viewing experience of the user watching the video.
  • a method for playing a video from a breakpoint including: acquiring a bookmark, the bookmark being created according to an indication of setting a breakpoint; acquiring time information of the video carried by the bookmark, and corresponding to the time information Viewing information of the played video; playing the video from the breakpoint indicated by the time information according to the angle of view indicated by the view information.
  • the perspective information may include a perspective identifier corresponding to a real perspective of the real camera.
  • playing the video from the breakpoint indicated by the time information according to the perspective represented by the view information including: requesting access to a video stream corresponding to the view identifier of the real view; from the breakpoint Play the video stream.
  • Alternatively, the view information may include the angle of a virtual view synthesized with reference to at least two real views. In this case, playing the video from the breakpoint indicated by the time information according to the view represented by the view information includes: determining at least two reference view identifiers of the virtual view according to the angle of the virtual view, where the at least two reference view identifiers respectively correspond to the at least two real views used to synthesize the virtual view; requesting access to the video streams corresponding to the at least two reference view identifiers; synthesizing those video streams into the video content corresponding to the virtual view; and playing the synthesized video content from the breakpoint.
  • Determining the at least two reference view identifiers of the virtual view according to the angle of the virtual view may include: acquiring the at least two reference view identifiers corresponding to the virtual view from the view information.
  • Alternatively, it may include: acquiring electronic program guide (EPG) metadata, where the EPG metadata carries view description information of the multi-view video, the view description information including calibration data of the real cameras corresponding to the real views, or a correspondence between the real views and the angles of virtual views; and determining the reference view identifiers according to that calibration data or correspondence, together with the angle of the virtual view.
  • a method for playing a video from a breakpoint including: receiving, by the first terminal, an indication of setting a breakpoint; the first terminal creating a bookmark according to the indication, where the bookmark carries time information of the video, And the view information of the play video corresponding to the time information; the first terminal sends the bookmark, so that the second terminal plays from the breakpoint indicated by the time information according to the angle of view indicated by the view information Video, the first terminal and the second terminal are the same terminal or different terminals.
  • the view information may include a view identifier corresponding to a real view of the real camera, so that the second terminal requests to access a video stream corresponding to the view identifier of the real view; and play the video stream from the breakpoint .
  • Alternatively, the view information may include the angle of a virtual view synthesized with reference to at least two real views, so that the second terminal determines, according to the angle of the virtual view, at least two reference view identifiers of the virtual view, where the at least two reference view identifiers respectively correspond to the at least two real views used to synthesize the virtual view; requests access to the video streams corresponding to the at least two reference view identifiers; synthesizes those video streams into the video content corresponding to the virtual view; and plays the synthesized video content from the breakpoint.
  • The view information may further include at least two reference view identifiers corresponding to the virtual view, so that the second terminal requests access to the video streams corresponding to the at least two reference view identifiers, synthesizes them into the video content corresponding to the virtual view, and plays the synthesized video content from the breakpoint.
  • A method for playing a video from a breakpoint is also provided, including: receiving a request from a second terminal for a bookmark, the bookmark carrying time information of the video and view information of the played video corresponding to the time information; and sending the bookmark to the second terminal, so that the second terminal plays the video from the breakpoint indicated by the time information according to the view represented by the view information.
  • The view information may include a view identifier corresponding to a real view of a real camera; in this case, playing the video from the breakpoint indicated by the time information according to the view represented by the view information means that the second terminal requests access to the video stream corresponding to that view identifier and plays the video stream from the breakpoint.
  • Alternatively, the view information may include the angle of a virtual view synthesized with reference to at least two real views. In this case, the second terminal plays the video from the breakpoint indicated by the time information by determining, according to the angle of the virtual view, at least two reference view identifiers of the virtual view; requesting access to the video streams corresponding to those reference view identifiers; synthesizing them into the video content corresponding to the virtual view; and playing the synthesized video content from the breakpoint.
  • Alternatively, the view information may include the angle of the virtual view and at least two reference view identifiers corresponding to the virtual view, the reference view identifiers respectively corresponding to the at least two real views used to synthesize the virtual view. In this case, the second terminal requests access to the video streams corresponding to the at least two reference view identifiers, synthesizes them into the video content corresponding to the virtual view, and plays the synthesized video content from the breakpoint.
  • The method may further include: generating electronic program guide (EPG) metadata that carries view description information of the multi-view video, the view description information including calibration data of the real cameras corresponding to the real views, or a correspondence between the real views and the angles of virtual views, so that the second terminal determines the reference view identifiers according to that calibration data or correspondence together with the angle of the virtual view.
  • the method may also include: storing the bookmark received from the first terminal.
  • a terminal device including: an acquiring unit, configured to acquire a bookmark, where the bookmark is created according to an instruction to set a breakpoint; a parsing unit, configured to acquire time information of a video carried by the bookmark, and The viewing angle information of the playback video corresponding to the time information; the playing unit, configured to play the video from the breakpoint indicated by the time information according to the viewing angle represented by the viewing angle information.
  • The playing unit may request access to the video stream corresponding to the view identifier of the real view and play that video stream from the breakpoint.
  • Alternatively, the playing unit may determine at least two reference view identifiers of the virtual view according to the angle of the virtual view, the reference view identifiers respectively corresponding to the at least two real views used to synthesize the virtual view; request access to the video streams identified by the at least two reference views; synthesize those video streams into the video content corresponding to the virtual view; and play the synthesized video content from the breakpoint.
  • the playing unit may acquire at least two reference view identifiers corresponding to the angle of the virtual perspective from the view information.
  • Alternatively, the playing unit may acquire electronic program guide (EPG) metadata, where the EPG metadata carries the view description information of the multi-view video, the view description information including the calibration data of the real cameras corresponding to the real views, or the correspondence between the real views and the angles of virtual views, and determine the reference view identifiers according to that calibration data or correspondence together with the angle of the virtual view.
  • a terminal device including: an indication receiving unit, configured to receive an indication of setting a breakpoint; a bookmark creating unit, configured to create a bookmark according to the indication received by the indication receiving unit, where the bookmark carries a video Time information, and view information of the play video corresponding to the time information; a bookmark transfer unit, configured to send the bookmark, so that the second terminal is represented by the time information according to the perspective represented by the view information The video is played at the breakpoint, and the terminal device and the second terminal are the same terminal or different terminals.
  • the view information carried in the bookmark created by the bookmark creating unit may include a view identifier corresponding to a real view of the real camera, so that the second terminal requests to access a video stream corresponding to the view identifier of the real view; The video stream is played at the breakpoint.
  • Alternatively, the view information may include the angle of a virtual view synthesized with reference to at least two real views, so that the second terminal determines, according to that angle, at least two reference view identifiers of the virtual view, the reference view identifiers respectively corresponding to the view identifiers of the at least two real views used to synthesize the virtual view; requests access to the video streams corresponding to the reference view identifiers; synthesizes those video streams into the video content corresponding to the virtual view; and plays the synthesized video content from the breakpoint.
  • Alternatively, the view information may include the angle of the virtual view and at least two reference view identifiers corresponding to the virtual view, the reference view identifiers respectively corresponding to the at least two real views used to synthesize the virtual view, so that the second terminal requests access to the video streams corresponding to the reference view identifiers, synthesizes them into the video content corresponding to the virtual view, and plays the synthesized video content from the breakpoint.
  • An apparatus for playing a video from a breakpoint is also provided, including: a receiving unit, configured to receive a request from a second terminal for a bookmark, the bookmark carrying time information of the video and view information of the played video corresponding to the time information; and a sending unit, configured to send the bookmark to the second terminal, so that the second terminal plays the video from the breakpoint indicated by the time information according to the view represented by the view information.
  • the view information carried in the bookmark sent by the sending unit may include a view identifier corresponding to a real view of the real camera, so that the second terminal requests to access a video stream corresponding to the view identifier of the real view, and The video stream is played at the breakpoint.
  • Alternatively, the view information may include the angle of a virtual view synthesized with reference to at least two real views, so that the second terminal determines, according to that angle, at least two reference view identifiers of the virtual view, the reference view identifiers respectively corresponding to the view identifiers of the at least two real views used to synthesize the virtual view; requests access to the video streams corresponding to the reference view identifiers; synthesizes those video streams into the video content corresponding to the virtual view; and plays the synthesized video content from the breakpoint.
  • Alternatively, the view information may include the angle of the virtual view and at least two reference view identifiers corresponding to the virtual view, the reference view identifiers respectively corresponding to the at least two real views used to synthesize the virtual view, so that the second terminal requests access to the video streams corresponding to the reference view identifiers, synthesizes them into the video content corresponding to the virtual view, and plays the synthesized video content from the breakpoint.
  • The device may further include a generating unit, configured to generate electronic program guide (EPG) metadata that carries view description information of the multi-view video, the view description information including calibration data of the real cameras corresponding to the real views, or a correspondence between the real views and the angles of virtual views, so that the second terminal determines the reference view identifiers according to that calibration data or correspondence together with the angle of the virtual view.
  • the apparatus may further include a storage unit for storing the bookmark received from the first terminal.
  • the embodiment of the present invention carries the view information in the bookmark, so that the video can be viewed according to the angle of view represented by the view information, thereby improving the viewing experience of the user watching the video.
  • FIG. 1 is a flow chart of a method of playing a video from a breakpoint in accordance with one embodiment of the present invention.
  • FIG. 2 is a flow chart of a method of playing a video from a breakpoint in accordance with another embodiment of the present invention.
  • FIG. 3 is a flow chart of a method of playing a video from a breakpoint in accordance with another embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of a process of playing a video from a breakpoint according to an embodiment of the present invention.
  • FIG. 5 is a schematic flow chart of a process of playing video from a breakpoint according to another embodiment of the present invention.
  • Figure 6 is a block diagram of a terminal device in accordance with one embodiment of the present invention.
  • Figure 7 is a block diagram of a terminal device in accordance with another embodiment of the present invention.
  • FIG. 8 is a block diagram of an apparatus for implementing playback of a video from a breakpoint, in accordance with one embodiment of the present invention.
  • FIG. 9 is a block diagram of an apparatus for implementing playback of a video from a breakpoint in accordance with another embodiment of the present invention.
  • FIG. 1 is a flow chart of a method of playing a video from a breakpoint in accordance with one embodiment of the present invention.
  • The method of FIG. 1 can be performed by a terminal device (hereinafter referred to as the "first terminal") that sets a breakpoint.
  • 101. The first terminal receives an indication to set a breakpoint.
  • the indication may be generated according to a user's key operation, for example, the user directly selects to create a bookmark from the menu.
  • the indication may be generated by the system according to other trigger conditions, for example, the system may create a bookmark by itself when the user performs an exit video operation.
  • the embodiment of the present invention does not limit the indication, and any existing indication manner may be adopted.
  • 102. The first terminal creates a bookmark according to the indication, where the bookmark carries time information of the video and view information of the played video corresponding to the time information.
  • the time information may indicate a playing time of the video when the breakpoint is set
  • the viewing angle information may indicate a viewing angle of the playing video when the breakpoint is set.
  • The view represented by the view information may be a real view corresponding to a real camera, or a virtual view corresponding to a plurality of real views. The user can select any one view, or a set of optimal views, when playing the video.
  • the perspective selected by the user may be a real perspective or a virtual perspective synthesized through multiple real perspectives.
  • the first terminal may record information of a view selected by the user, and carry information of the view, that is, the view information, in the bookmark.
  • If the view is a real view, the view information may include the view identifier of the real view (for example, denoted ViewId).
  • If the view is a virtual view, the view information may include the angle of the virtual view (for example, denoted ViewAngle).
  • The view information may further include the reference view identifiers corresponding to the virtual view (for example, denoted ReferenceViewId).
  • A reference view identifier corresponds to the view identifier of a real view used to synthesize the virtual view (an illustrative sketch of these bookmark fields follows).
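  • As an illustration only, the bookmark contents described above might be modeled as follows. The field names ProgramId, Offset, ViewId, ViewAngle and ReferenceViewId come from the text; the class, helper and example values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Bookmark:
    """Hypothetical model of the bookmark described above."""
    program_id: str                       # ProgramId: the program the bookmark belongs to
    offset_s: float                       # Offset: time information marking the breakpoint
    view_id: Optional[int] = None         # ViewId: set when a real view was selected
    view_angle: Optional[float] = None    # ViewAngle: set when a virtual view was selected
    reference_view_ids: List[int] = field(default_factory=list)  # optional ReferenceViewIds

    def is_virtual_view(self) -> bool:
        # A virtual view is described by its angle rather than a real ViewId.
        return self.view_angle is not None

# Example: a breakpoint at 1325 s on a virtual view of 3.5 degrees,
# synthesized from the real views 2 and 3 (all values illustrative).
bm = Bookmark(program_id="P1001", offset_s=1325.0,
              view_angle=3.5, reference_view_ids=[2, 3])
```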
  • 103. The first terminal sends the bookmark, so that the second terminal plays the video from the breakpoint indicated by the time information according to the view represented by the view information.
  • the first terminal and the second terminal may be the same terminal or different terminals, which is not limited by the embodiment of the present invention.
  • For example, the first terminal and the second terminal may be two terminals belonging to the same user, who can watch the same video content on both: the user watches the video on the first terminal, exits the video for some reason, and later resumes viewing on the first terminal or the second terminal according to the information recorded in the bookmark.
  • the first terminal and the second terminal may be terminals belonging to different users, and the first terminal may set a breakpoint at the content that is considered interesting for other users to watch.
  • the first terminal and the second terminal are the same terminal, and the same user or different users may choose to view the same video content on the terminal.
  • The first terminal can upload the created bookmark to the web server for storage. Since the view information corresponding to the time information indicating the breakpoint is recorded in the bookmark, the next time the user watches the video (on the same terminal or a different one) the user can obtain the bookmark from the web server and continue watching the video from the view represented by the view information in the bookmark.
  • The first terminal may also send the bookmark directly to the second terminal.
  • For example, if the user of the first terminal wants to share the video content with the user of the second terminal, the bookmark can be recommended to the user of the second terminal; the second terminal acquires the bookmark and continues viewing the video according to the view indicated by the view information in the bookmark.
  • the corresponding program identification may also be included in the bookmark.
  • the bookmark may be stored in association with the program identification for subsequent supply to the terminal requesting the bookmark.
  • the bookmark may further include a user identifier of the terminal that records the bookmark, and when the bookmark is stored, the web server may store the bookmark in association with the user identifier for subsequent delivery to the terminal requesting the bookmark.
  • the bookmark may include a unique bookmark identifier, and when the bookmark is stored, the web server may store the bookmark in association with the bookmark identifier for subsequent supply to the terminal requesting the bookmark.
  • Embodiments of the present invention are not limited to the examples of these specific storage methods, but may store bookmarks in any existing manner.
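  • A minimal sketch of how a web server might index stored bookmarks by program identifier, user identifier, and a unique bookmark identifier, as described above, is given below. This is only one illustrative scheme; the class and method names are hypothetical, and any existing storage manner could be used instead.

```python
import uuid
from typing import Any, Dict, List

class BookmarkStore:
    """Illustrative in-memory index; not prescribed by the text."""
    def __init__(self) -> None:
        self._by_id: Dict[str, Any] = {}             # bookmark identifier -> bookmark
        self._by_user: Dict[str, List[str]] = {}     # user identifier -> bookmark identifiers
        self._by_program: Dict[str, List[str]] = {}  # program identifier -> bookmark identifiers

    def save(self, user_id: str, program_id: str, bookmark: Any) -> str:
        bookmark_id = str(uuid.uuid4())              # unique bookmark identifier
        self._by_id[bookmark_id] = bookmark
        self._by_user.setdefault(user_id, []).append(bookmark_id)
        self._by_program.setdefault(program_id, []).append(bookmark_id)
        return bookmark_id

    def get(self, bookmark_id: str) -> Any:
        return self._by_id[bookmark_id]

    def find(self, user_id: str, program_id: str) -> List[Any]:
        # Bookmarks recorded by this user for this program.
        ids = [b for b in self._by_user.get(user_id, [])
               if b in self._by_program.get(program_id, [])]
        return [self._by_id[b] for b in ids]
```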
  • the embodiment of the present invention carries the view information in the bookmark, so that the video can be continuously viewed according to the view angle indicated by the view information, thereby improving the viewing experience of the user watching the video.
  • FIG. 2 is a flow chart of a method of playing a video from a breakpoint in accordance with another embodiment of the present invention.
  • the method of Figure 2 is performed by a terminal device (e.g., the second terminal described above) that resumes the video.
  • the bookmark is created according to the instruction to set a breakpoint.
  • the second terminal may acquire a bookmark from a web server or obtain a bookmark from a first terminal that creates a bookmark.
  • The acquired bookmark may be the bookmark created by the first terminal in step 102 of FIG. 1; to avoid repetition, it is not described in detail again.
  • the second terminal may acquire the bookmark by sending a request to the web server or the first terminal, or by receiving the push of the web server or the first terminal.
  • the second terminal can present a list of video programs to the user.
  • the second terminal may request the web server or the first terminal to issue the corresponding bookmark.
  • the embodiment of the present invention does not limit the manner in which the second terminal acquires the bookmark, and may request the bookmark from the network server or the first terminal according to any existing manner.
  • The request may carry information such as the corresponding program identifier, user identifier, and/or bookmark identifier, so that the web server or the first terminal provides the corresponding bookmark according to this information.
  • the terminal device that sets the breakpoint may be the same device as the terminal device that obtains the bookmark, or may be two different devices.
  • the time information may indicate the playing time of the video when the breakpoint is set
  • the viewing angle information may indicate the viewing angle of the playing video when the breakpoint is set.
  • The view represented by the view information may be a real view corresponding to a real camera, or a virtual view corresponding to a plurality of real views.
  • The view information may include the view identifier ViewId of the real view.
  • The view information may include the angle ViewAngle of the virtual view.
  • If the view represented by the view information is a real view, the view information includes the view identifier of that real view, and the terminal device may request access to the video stream corresponding to that view identifier.
  • If the view represented by the view information is a virtual view, the terminal device accesses the video streams according to the angle ViewAngle of the virtual view.
  • The embodiments of the present invention do not limit the manner in which the terminal device accesses the video stream: the terminal may access only the video stream after the breakpoint, or may also access the video stream earlier than the breakpoint, but in either case it plays the video stream from the breakpoint.
  • The terminal device may determine, according to the angle of the virtual view, at least two reference view identifiers (ReferenceViewId) of the virtual view, which respectively correspond to the view identifiers of the at least two real views used to synthesize the virtual view.
  • For example, the two reference view identifiers ReferenceViewId1 and ReferenceViewId2 of the virtual view can take the values ViewId1 and ViewId2, respectively.
  • The terminal device then requests access to the video streams corresponding to the determined reference view identifiers, synthesizes those video streams into the video content corresponding to the virtual view, and plays the synthesized video content from the breakpoint.
  • the terminal device can directly obtain the reference view identifier corresponding to the angle of the virtual view from the view information.
  • the terminal device may obtain the reference view identifier corresponding to the angle of the virtual view through other information.
  • Alternatively, the terminal device can use the EPG (Electronic Program Guide) metadata delivered by the web server, together with the angle ViewAngle of the virtual view carried in the view information, to determine the reference view identifiers corresponding to that angle.
  • Specifically, the terminal device may obtain the electronic program guide (EPG) metadata, where the EPG metadata carries the view description information of the multi-view video; the view description information may include, but is not limited to, the calibration data of the real cameras corresponding to the real views, or the correspondence between the real views and the angles of virtual views.
  • The terminal device then determines the reference view identifiers according to that calibration data or correspondence, together with the angle of the virtual view. A sketch of the overall resume flow follows.
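  • Bringing the two cases together, a minimal sketch of the resume logic on the second terminal might look like the following, reusing the hypothetical Bookmark class sketched earlier. The functions request_stream, synthesize_views, determine_reference_views and play_from are placeholder stubs standing in for the terminal's actual streaming, view-synthesis and playback machinery, not an API defined by the patent.

```python
# Placeholder stubs for the terminal's real streaming, synthesis and playback machinery.
def request_stream(program_id, view_id): ...
def synthesize_views(streams, angle): ...
def determine_reference_views(epg_metadata, view_angle): ...
def play_from(stream, offset_s): ...

def resume_from_bookmark(bm, epg_metadata=None):
    """Illustrative resume flow: choose streams by the bookmarked view, then seek to the breakpoint."""
    if not bm.is_virtual_view():
        # Real view: request the stream identified by ViewId and play it from the breakpoint.
        stream = request_stream(bm.program_id, view_id=bm.view_id)
        play_from(stream, offset_s=bm.offset_s)
        return

    # Virtual view: find the real views used to synthesize it.
    if bm.reference_view_ids:
        ref_ids = bm.reference_view_ids                    # ReferenceViewIds carried in the bookmark
    else:
        # Otherwise derive them from the EPG view description info and the virtual angle.
        ref_ids = determine_reference_views(epg_metadata, bm.view_angle)

    streams = [request_stream(bm.program_id, view_id=v) for v in ref_ids]
    virtual = synthesize_views(streams, angle=bm.view_angle)
    play_from(virtual, offset_s=bm.offset_s)
```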
  • The EPG can be delivered before the terminal device obtains the bookmark; for example, the web server can send the EPG to the terminal device before the terminal device requests the program.
  • the embodiment of the present invention carries the perspective information in the bookmark, so that the video can be continuously viewed according to the perspective represented by the perspective information, thereby improving the viewing experience of the user watching the video.
  • Moreover, when the terminal device resumes the video, the user does not need to select the viewing angle again, which improves convenience.
  • FIG. 3 is a flow chart of a method of playing a video from a breakpoint in accordance with another embodiment of the present invention.
  • the method of FIG. 3 is executed by a web server or a first terminal that creates a bookmark, and corresponds to the methods of FIGS. 1 and 2, and thus, a detailed description will be appropriately omitted.
  • 301. A request from the second terminal for a bookmark is received, where the bookmark carries time information of the video and view information of the played video corresponding to the time information.
  • the bookmark requested in step 301 may be the bookmark created in step 102 of Fig. 1, and thus the description will not be repeated.
  • The request may carry information such as the corresponding program identifier, user identifier, and/or bookmark identifier, so that the web server provides the corresponding bookmark based on this information.
  • The view represented by the view information may be a real view corresponding to a real camera, or a virtual view corresponding to a plurality of real views.
  • If the view is a real view, the view information may include, but is not limited to, the view identifier ViewId of the real view, so that the second terminal requests access to the video stream corresponding to that view identifier and plays the video stream from the breakpoint.
  • If the view represented by the view information is a virtual view, the view information may include, but is not limited to, the angle ViewAngle of the virtual view, so that the second terminal determines, according to that angle, at least two reference view identifiers of the virtual view, requests access to the video streams corresponding to those reference view identifiers, synthesizes them into the video content corresponding to the virtual view, and plays the synthesized video content from the breakpoint.
  • If the view represented by the view information is a virtual view, the view information may further include the reference view identifiers corresponding to the virtual view. Each reference view identifier corresponds to the view identifier of a real view used to synthesize the virtual view, so that the second terminal requests access to the video streams corresponding to the at least two reference view identifiers, synthesizes them into the video content corresponding to the virtual view, and plays the synthesized video content from the breakpoint.
  • 302. The bookmark is sent to the second terminal, so that the second terminal plays the video from the breakpoint indicated by the time information according to the view represented by the view information.
  • the first terminal and the second terminal may be the same terminal or different terminals, which is not limited by the embodiment of the present invention.
  • Optionally, the EPG metadata generated by the network server may carry the view description information of the multi-view video, so that the second terminal determines the at least two reference view identifiers of the virtual view according to that view description information and the angle of the virtual view.
  • The view description information of the multi-view video may include, but is not limited to, the calibration data of the real cameras corresponding to the real views, or the correspondence between the real views and the angles of virtual views.
  • the embodiment of the present invention carries the view information in the bookmark, so that the video can be continuously viewed according to the view angle indicated by the view information, thereby improving the viewing experience of the user watching the video. Embodiments of the present invention will be described in more detail below with reference to specific examples.
  • FIG. 4 is a schematic flow chart of a process of playing a video from a breakpoint according to an embodiment of the present invention.
  • the embodiment of Figure 4 is an example of a web server storing bookmarks.
  • In FIG. 4, "terminal A" denotes a terminal device that sets a breakpoint (an example of the above-mentioned "first terminal"), "terminal B" denotes a terminal device that resumes the video (an example of the above-mentioned "second terminal"), and "web server" denotes a server device that stores bookmarks.
  • Terminal A and terminal B can be the same device or different devices.
  • the terminal A and the terminal B may belong to the same user, and may also belong to different users.
  • the network server sends the video content to the terminal A in a video stream manner.
  • the user of terminal A can select one or a set of optimal viewing angles.
  • terminal A may first receive a complete MVC video stream, and then obtain parameters such as camera calibration data from the MVC video stream.
  • The camera calibration parameters can be used to determine the view identifier, shooting angle, and so on of each camera; terminal A generates a view-selection control (for example, a slider) from this information, together with a mapping between slider positions and views, and allows the user to drag the slider to select the best viewing angle.
  • The optimal view selected by the user may correspond to the real view of a real camera, or may lie between the real views of two real cameras (i.e., a virtual view), as illustrated in the sketch below.
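  • As a sketch of the mapping just described (the function name, tolerance and camera angles are all illustrative, not taken from the patent), the terminal could map a slider position to either a real view or a virtual view between two adjacent cameras:

```python
from bisect import bisect_right
from typing import List, Tuple

def view_for_slider(angle: float,
                    cameras: List[Tuple[int, float]],
                    tol: float = 0.1):
    """cameras: (ViewId, shooting angle) pairs sorted by angle; `angle` is assumed
    to lie within the range covered by the cameras.
    Returns ("real", view_id) when the slider sits on a camera, otherwise
    ("virtual", angle, (left_id, right_id)) for a view between two adjacent cameras."""
    for view_id, cam_angle in cameras:
        if abs(angle - cam_angle) <= tol:
            return ("real", view_id)
    angles = [a for _, a in cameras]
    i = bisect_right(angles, angle)          # index of the first camera to the right
    left_id, right_id = cameras[i - 1][0], cameras[i][0]
    return ("virtual", angle, (left_id, right_id))

# Example with three hypothetical cameras shooting at 0, 5 and 10 degrees.
cams = [(1, 0.0), (2, 5.0), (3, 10.0)]
print(view_for_slider(5.0, cams))   # ('real', 2)
print(view_for_slider(3.5, cams))   # ('virtual', 3.5, (1, 2))
```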
  • the user of terminal A gives an indication to set a breakpoint.
  • terminal A can create a bookmark according to the instruction of the set breakpoint, and record the current viewing information of the user.
  • The information in the bookmark falls mainly into two categories. One is the information that must be carried in the bookmark, including the program identifier ProgramId associated with the bookmark and the program time offset Offset (i.e., the time information indicating the breakpoint). The other is optional information such as the creator's username, user-entered comments, and so on.
  • This bookmark information belongs to the prior art, so how the terminal obtains it and how the server and terminal later use it are neither explained in detail nor particularly limited here.
  • In addition, view information indicating the user's currently selected view is also recorded in the bookmark.
  • Terminal A records the best view selected by the user and accesses the corresponding video stream according to that view. Specifically, the terminal obtains the optimal view selected by the user from the mapping between the selection control and the views. If the optimal view corresponds to a real camera, terminal A records the view identifier ViewId of that view and adds it to the bookmark when the bookmark is created, as the real-view identifier included in the view information.
  • Alternatively, if the user selects an optimal view that is a virtual view synthesized from two real views, terminal A can record the angle ViewAngle of the virtual view and add ViewAngle to the bookmark when creating the bookmark.
  • The angle ViewAngle of the virtual view can be expressed in various forms.
  • For example, ViewAngle may be an angle value in any available unit, such as degrees or radians, or it may take other forms, such as a corresponding index value.
  • Optionally, terminal A may also record the view identifiers of the two real views used to synthesize the virtual view and add them to the bookmark when the bookmark is created, as the reference view identifiers of the virtual view included in the view information.
  • For example, in an XML (Extensible Markup Language) format bookmark, a view description can be added to the bookmark along the lines sketched after the next few bullets.
  • The view represented by ViewInfo can be of two types, real and virtual, where RealViewType denotes the real view type and VirtualViewType denotes the virtual view type.
  • For a real view, the view information carries the view identifier ViewId of the real view.
  • For a virtual view, the view information carries the angle ViewAngle of the virtual view.
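  • Since the original XML snippet is not reproduced here, the following is only an illustrative guess at what such a bookmark description could look like, using the element names mentioned in the text (ViewInfo, RealViewType/VirtualViewType, ViewId, ViewAngle, ReferenceViewId); the surrounding element names and values are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical bookmark for a virtual view of 3.5 degrees with reference views 2 and 3.
BOOKMARK_XML = """
<Bookmark>
  <ProgramId>P1001</ProgramId>
  <Offset>1325</Offset>
  <ViewInfo>
    <VirtualViewType>
      <ViewAngle>3.5</ViewAngle>
      <ReferenceViewId>2</ReferenceViewId>
      <ReferenceViewId>3</ReferenceViewId>
    </VirtualViewType>
  </ViewInfo>
</Bookmark>
"""

root = ET.fromstring(BOOKMARK_XML)
view_info = root.find("ViewInfo")
real = view_info.find("RealViewType")
if real is not None:
    view_id = int(real.findtext("ViewId"))            # real view: carries ViewId
    print("real view", view_id)
else:
    virtual = view_info.find("VirtualViewType")
    angle = float(virtual.findtext("ViewAngle"))      # virtual view: carries ViewAngle
    refs = [int(e.text) for e in virtual.findall("ReferenceViewId")]
    print("virtual view", angle, "reference views", refs)
```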
  • Terminal A uploads a bookmark to the web server.
  • the web server stores the received bookmark.
  • the bookmark may be stored in association with the program identification, the user identification, and/or the bookmark identification for subsequent delivery to the terminal requesting the bookmark.
  • Embodiments of the present invention are not limited to these specific storage methods; the bookmarks may be stored in any existing manner, so the way the web server stores bookmarks is neither explained in detail nor particularly limited.
  • the terminal B requests the bookmark from the network server. For example, the user logs in to terminal A using his or her own user ID, watches the video program, and records the bookmark on terminal A. At the same time, the same user ID is used to log in to terminal B, and it is desirable to watch the same video on terminal B.
  • the user of the terminal A can recommend the video program to the user of the terminal B after recording the bookmark, and provide the terminal B with the information required to acquire the bookmark.
  • Terminal B requests a bookmark from the web server based on the information provided by terminal A. For example, terminal B can present a list of video programs to the user and indicate which programs have bookmarks.
  • Upon receiving an indication that the user wants to continue watching a program (e.g., the user clicks the continue-viewing button for the program), terminal B requests the corresponding bookmark from the web server.
  • the embodiment of the present invention does not limit the manner in which the bookmark is requested, and the bookmark can be requested from the web server in any existing manner.
  • the network server sends a bookmark to the terminal B according to the request of the terminal B.
  • the terminal B requests the network server to access the corresponding video stream according to the information recorded in the bookmark.
  • If the bookmark carries the view identifier ViewId of a real view, terminal B may request the multi-view video stream corresponding to that ViewId and then decode and display it.
  • If the bookmark carries the angle ViewAngle of a virtual view, terminal B first determines the reference view identifiers and then acquires the multi-view video streams according to those reference view identifiers.
  • To do so, terminal B can obtain the camera calibration data and determine the reference view identifiers according to the calibration data and the angle of the virtual view.
  • A reference view identifier is the view identifier corresponding to a real view.
  • Before displaying the acquired video stream, terminal B needs to obtain the camera parameters corresponding to the reference view identifiers from the video stream in order to perform the virtual view synthesis and display the synthesized video content. Specifically, terminal B first needs to request access to a complete MVC video stream to obtain the camera calibration data.
  • In an MVC video stream, the camera calibration parameters are stored in the multiview acquisition information SEI (Supplemental Enhancement Information) message and are associated with an IDR (Instantaneous Decoding Refresh) AU (Access Unit).
  • Terminal B calculates the position of the camera based on the calibration parameters.
  • The calibration data includes two parts: intrinsic parameters and extrinsic parameters.
  • The intrinsic parameters (such as the focal length and principal point) determine the camera's internal geometric and optical characteristics, while the extrinsic parameters (a coordinate-system transformation, i.e., a rotation and translation) determine the camera's three-dimensional position and orientation in a world coordinate system.
  • The reference views corresponding to the virtual view are then determined. For example, because adjacent real views are more similar, view synthesis is usually based on two adjacent real views.
  • The manner of obtaining the calibration data and determining the reference view identifiers from it is described below with reference to specific examples. Thus, by carrying the view information in the bookmark, the video can be resumed according to the view represented by the view information, improving the user's viewing experience.
  • Optionally, the reference view identifiers ReferenceViewId may also be recorded in the bookmark created by terminal A, so that terminal B can obtain the reference view identifiers directly from the bookmark and request access only to the video streams corresponding to those identifiers without receiving a complete MVC video stream, which saves terminal B's access bandwidth and improves the efficiency of data transmission.
  • In other words, terminal A can also record the view identifiers of the two real views used to synthesize the virtual view and add them to the bookmark when creating the bookmark, as the two reference view identifiers ReferenceViewId of the virtual view included in the view information.
  • Terminal B then requests the multi-view video stream according to the view identifier of the real camera or the reference view identifiers of the virtual view, and either decodes the stream to display the real-view content, or obtains the video content corresponding to the reference view identifiers from the stream, synthesizes the video content corresponding to the virtual view, and displays it. By carrying the view information in the bookmark, the video can be resumed according to the view represented by the view information, improving the user's viewing experience.
  • Moreover, when the reference view identifiers are carried directly in the view information, the camera calibration parameters do not need to be parsed from the video stream in order to determine them, which reduces the amount of data exchanged when resuming and allows quick access to the user-selected view.
  • FIG. 5 is a schematic flow chart of a process of playing video from a breakpoint according to another embodiment of the present invention.
  • In FIG. 5, "terminal A" denotes the terminal device that sets the breakpoint, "terminal B" denotes the terminal device that resumes the video, and "web server" denotes the server device that stores the bookmark.
  • Terminal A and terminal B can be the same device or different devices.
  • In FIG. 5, steps that are the same as or similar to those of FIG. 4 are denoted by the same reference numerals, and repeated description is omitted as appropriate.
  • The embodiment of FIG. 5 differs from that of FIG. 4 in that, if the view represented by the view information in the bookmark is a virtual view and the view information includes the angle of the virtual view but not the reference view identifiers, the web server may carry the view description information of the multi-view video in the EPG metadata, so that terminal B determines the reference view identifiers of the virtual view according to that view description information and the angle of the virtual view included in the bookmark.
  • The network server provides the EPG metadata to terminal B so that terminal B can perform video on demand; the EPG metadata carries the view description information of the multi-view video.
  • The view description information may be the calibration data of the camera corresponding to each view, or it may be the correspondence between the real views and virtual views.
  • Table 1 is one specific implementation of carrying the view description information of the multi-view video in the EPG metadata.
  • In Table 1, an XML element describing per-view calibration data is added to the EPG metadata; this data may be similar to the camera calibration data obtained from the MVC video stream in step 401.
  • Each view identifier ViewId has corresponding intrinsic and extrinsic parameters.
  • The intrinsic parameters may include the horizontal focal length, vertical focal length, principal-point horizontal coordinate, principal-point vertical coordinate, distortion factor, and so on, and the extrinsic parameters may include a rotation matrix and a translation vector. From these data, the position and direction of the camera for each ViewId can be calculated; the calculation is similar to the prior art and is therefore not described in detail.
  • Terminal B may calculate the position and direction of each camera's view according to the calibration data of the cameras described in the EPG metadata, and thereby determine the shooting angle of each camera (a sketch of this calculation follows). Terminal B can then determine the reference view identifiers corresponding to the angle ViewAngle of the virtual view carried in the bookmark.
  • For example, if ViewAngle falls between the shooting angles of the cameras identified by ViewId1 and ViewId2, then ViewId1 and ViewId2 can be used as the reference view identifiers for the virtual view angle ViewAngle.
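  • As a sketch of the position and direction computation mentioned above (a standard pinhole-camera relationship under the convention x_cam = R x_world + t, not an algorithm prescribed by the patent), the camera centre is C = -Rᵀt and its viewing direction is the rotated optical axis:

```python
import numpy as np

def camera_pose(R: np.ndarray, t: np.ndarray):
    """Return (centre, viewing direction) in world coordinates from the extrinsics,
    assuming the convention x_cam = R @ x_world + t."""
    centre = -R.T @ t                              # camera centre in world coordinates
    direction = R.T @ np.array([0.0, 0.0, 1.0])    # optical (z) axis in world coordinates
    return centre, direction

def shooting_angle_deg(direction: np.ndarray) -> float:
    """Illustrative horizontal shooting angle in the world X-Z plane."""
    return float(np.degrees(np.arctan2(direction[0], direction[2])))

# Example: identity rotation, camera offset 1 m along X.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
centre, direction = camera_pose(R, t)
print(centre, shooting_angle_deg(direction))   # [-1. 0. 0.] 0.0
```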
  • Table 2 is another specific implementation manner of the view description information carrying the multi-view video in the EPG metadata. Table 2 adds an XML element in the EPG metadata to describe the correspondence between the real view and the virtual view.
  • Each ViewRelation entry includes the left reference view, the right reference view, the minimum angle of the virtual views that the reference views can synthesize, and the maximum angle of the virtual views that the reference views can synthesize.
  • the parameters in Table 2 directly give the shooting angle of each camera.
  • the network server may calculate the parameters of Table 2 based on the parameters in Table 1.
  • For example, if the left reference view is ViewId1, the right reference view is ViewId2, the minimum synthesizable virtual-view angle is 0, and the maximum synthesizable virtual-view angle is 5, this indicates that the two views ViewId1 and ViewId2 can synthesize virtual views with angles in the range 0 to 5.
  • Terminal B can thus determine the reference view identifiers corresponding to the angle ViewAngle of the virtual view carried in the bookmark according to the correspondence between real and virtual views in Table 2: if ViewAngle lies between the shooting angles of two cameras (with view identifiers ViewId1 and ViewId2), i.e., falls within the range of virtual-view angles the two cameras can synthesize, then ViewId1 and ViewId2 are used as the reference view identifiers. A sketch of such a lookup follows.
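  • A hedged sketch of this Table 2 style lookup: given ViewRelation entries (left reference view, right reference view, minimum angle, maximum angle), pick the entry whose angle range contains the bookmarked ViewAngle. The field and function names are paraphrased from the text and are not the patent's exact schema.

```python
from typing import List, NamedTuple, Optional, Tuple

class ViewRelation(NamedTuple):
    left_view_id: int    # left reference view (e.g. ViewId1)
    right_view_id: int   # right reference view (e.g. ViewId2)
    min_angle: float     # minimum virtual-view angle the pair can synthesize
    max_angle: float     # maximum virtual-view angle the pair can synthesize

def reference_views_for_angle(relations: List[ViewRelation],
                              view_angle: float) -> Optional[Tuple[int, int]]:
    """Return the (left, right) reference view identifiers whose range covers ViewAngle."""
    for rel in relations:
        if rel.min_angle <= view_angle <= rel.max_angle:
            return rel.left_view_id, rel.right_view_id
    return None  # no pair of real views can synthesize this angle

# Example mirroring the 0-5 degree range described above.
relations = [ViewRelation(1, 2, 0.0, 5.0), ViewRelation(2, 3, 5.0, 10.0)]
print(reference_views_for_angle(relations, 3.5))   # (1, 2)
```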
  • "0" means that this parameter is an optional parameter ( Opt iona l ).
  • the various specific parameter names given in the above embodiments are merely exemplary, and do not limit the scope of the embodiments of the present invention, but may be adjusted according to actual needs. Such modifications are intended to fall within the scope of the embodiments of the invention.
  • Embodiments of the present invention provide a method for supporting a resume of a multi-view system.
  • the user can watch from the selected perspective when viewing the program next time, which continues the viewing experience of the user, and saves the trouble of re-switching the viewing angle to find the best viewing angle.
  • When the view selected by the user is a virtual view, the reference view identifiers of the virtual view are carried in the bookmark, or the calibration data or correspondence used to determine the reference view identifiers is carried in the EPG metadata, so that the user can access only the previously selected best view rather than the video streams of all views, reducing the amount of data exchanged and saving bandwidth.
  • FIG. 6 is a block diagram of a terminal device in accordance with one embodiment of the present invention.
  • An example of the terminal device 60 of FIG. 6 is a terminal device (first terminal) that sets a breakpoint, such as the terminal A in FIGS. 4 and 5.
  • the terminal device 60 includes an instruction receiving unit 61, a bookmark creating unit 62, and a bookmark transmitting unit 63.
  • the indication receiving unit 61 receives an indication to set a breakpoint.
  • the bookmark creating unit 62 creates a bookmark according to the instruction received by the instruction receiving unit, the bookmark carrying time information of the video, and viewing angle information of the playing video corresponding to the time information.
  • the bookmark transfer unit 63 transmits the bookmark so that the second terminal plays the video from the breakpoint indicated by the time information according to the angle of view indicated by the view information.
  • the terminal device 60 and the second terminal are the same terminal or different terminals.
  • the embodiment of the present invention carries the view information in the bookmark, so that the video can be viewed according to the angle of view represented by the view information, thereby improving the viewing experience of the user watching the video.
  • the bookmark transfer unit 63 can send the bookmark to the web server for storage so that the second terminal can acquire the bookmark from the web server. Alternatively, the bookmark transfer unit 63 may also directly send the bookmark to the second terminal.
  • the terminal device 60 can implement the operations of the terminal device (first terminal) involved in setting the breakpoint in the embodiments of FIGS. 1 to 5 described above.
  • If the view represented by the view information is a real view, the view information carried in the bookmark created by the bookmark creating unit 62 may include the view identifier of the real view.
  • If the view represented by the view information is a virtual view synthesized with reference to a plurality of real views, the view information carried in the bookmark created by the bookmark creating unit 62 may include the angle of the virtual view.
  • The view information carried in the bookmark created by the bookmark creating unit 62 may further include at least two reference view identifiers corresponding to the angle of the virtual view, the reference view identifiers respectively corresponding to the real views used to synthesize the virtual view, which reduces the amount of data exchanged when resuming.
  • terminal device 60 Other functions and operations of the terminal device 60 can be referred to the method embodiment above. To avoid repetition, the detailed description will not be repeated.
  • FIG. 7 is a block diagram of a terminal device in accordance with another embodiment of the present invention.
  • An example of the terminal device 70 of FIG. 7 is a terminal device (second terminal) that resumes playback of the video, such as the terminal B in FIGS. 4 and 5.
  • The terminal device 70 includes an acquiring unit 71, a parsing unit 72, and a playing unit 73.
  • The acquiring unit 71 acquires a bookmark, where the bookmark is created according to an indication to set a breakpoint.
  • The parsing unit 72 acquires time information of the video carried in the bookmark, and view information, corresponding to the time information, for playing the video.
  • The playing unit 73 plays the video from the breakpoint indicated by the time information according to the view indicated by the view information.
  • In this embodiment of the present invention, the view information is carried in the bookmark, so that the video can be viewed according to the view indicated by the view information, which improves the user's viewing experience.
  • The acquiring unit 71 may acquire the bookmark from a network server, or may acquire the bookmark directly from the first terminal that created the bookmark.
  • The terminal device 70 can implement the operations of the terminal device (second terminal) that resumes playback of the video in the embodiments of FIGS. 1 to 5 described above.
  • For example, if the view indicated by the view information acquired by the parsing unit 72 is a real view corresponding to a real camera, the view information may include the view identifier of the real view; in this case, the playing unit 73 may request access to the video stream corresponding to the view identifier of the real view and play the video stream from the breakpoint.
  • On the other hand, if the view indicated by the view information acquired by the parsing unit 72 is a virtual view synthesized with reference to a plurality of real views, the view information may include the angle of the virtual view; in this case, the playing unit 73 may, according to the angle of the virtual view, determine at least two reference view identifiers of the virtual view (the at least two reference view identifiers respectively corresponding to the view identifiers of at least two real views used to synthesize the virtual view), request access to the video streams corresponding to the reference view identifiers, synthesize the video streams corresponding to the reference view identifiers into video content corresponding to the virtual view, and play the synthesized video content from the breakpoint.
  • The playing unit 73 may obtain the reference view identifiers corresponding to the virtual view from the view information. Alternatively, the playing unit 73 may acquire EPG metadata that carries view description information of the multi-view video (the view description information includes, but is not limited to, calibration data of the real cameras corresponding to the real views, or a correspondence between the real views and angles of virtual views), and determine the reference view identifiers according to that calibration data or correspondence and according to the angle information of the virtual view.
  • For example, the view description information of the multi-view video may be the calibration data of Table 1 or the correspondence of Table 2 (an illustrative sketch follows).
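  • The following is an illustrative sketch only of EPG metadata carrying a Table 2-style correspondence between pairs of real views and the virtual-view angles they can synthesize. The fields (left reference view, right reference view, minimum synthesizable angle, maximum synthesizable angle) are those described in this application, but the element names MultiViewDescription, ViewRelation, LeftViewId, RightViewId, MinAngle and MaxAngle, as well as the values, are assumptions, since Tables 1 and 2 appear only as images in the original publication. A Table 1-style alternative would instead list the intrinsic and extrinsic calibration parameters of each real camera.

    <MultiViewDescription>
      <!-- Each ViewRelation states which virtual-view angles two real views can synthesize -->
      <ViewRelation>
        <LeftViewId>1</LeftViewId>
        <RightViewId>2</RightViewId>
        <MinAngle>0</MinAngle>
        <MaxAngle>5</MaxAngle>
      </ViewRelation>
      <ViewRelation>
        <LeftViewId>2</LeftViewId>
        <RightViewId>3</RightViewId>
        <MinAngle>5</MinAngle>
        <MaxAngle>10</MaxAngle>
      </ViewRelation>
    </MultiViewDescription>

  • A terminal whose bookmark carries ViewAngle 3.5 would match the first ViewRelation (0 ≤ 3.5 ≤ 5) and therefore use view identifiers 1 and 2 as the reference view identifiers for synthesis.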
  • For other functions and operations of the terminal device 70, reference may be made to the above method embodiments; details are not described again to avoid redundancy.
  • FIG. 8 is a block diagram of an apparatus for implementing playback of a video from a breakpoint, in accordance with one embodiment of the present invention.
  • An example of the apparatus 80 of FIG. 8 is the network server in FIGS. 4 and 5, or the terminal device 60 of FIG. 6.
  • The apparatus 80 includes a receiving unit 81 and a sending unit 82.
  • The receiving unit 81 receives a request for a bookmark from a second terminal, where the bookmark carries time information of the video and view information, corresponding to the time information, for playing the video.
  • The sending unit 82 sends the bookmark to the second terminal, so that the second terminal plays the video from the breakpoint indicated by the time information according to the view indicated by the view information.
  • In this embodiment of the present invention, the view information is carried in the bookmark, so that playback of the video can be resumed according to the view indicated by the view information, which improves the user's viewing experience.
  • The apparatus 80 can implement the operations of the network server in the embodiments of FIGS. 1 to 5 described above.
  • For example, if the view indicated by the view information is a real view corresponding to a real camera, the view information carried in the bookmark sent by the sending unit 82 may include the view identifier of the real view.
  • Alternatively, if the view indicated by the view information is a virtual view synthesized with reference to a plurality of real views, the view information carried in the bookmark sent by the sending unit 82 may include the angle of the virtual view.
  • Optionally, if the view indicated by the view information is a virtual view, the view information carried in the bookmark sent by the sending unit 82 may further include at least two reference view identifiers corresponding to the virtual view, where the at least two reference view identifiers respectively correspond to the view identifiers of at least two real views used to synthesize the virtual view.
  • In this way, the second terminal may request access to the video streams corresponding to the at least two reference view identifiers, synthesize the video streams corresponding to the at least two reference view identifiers into video content corresponding to the virtual view, and play the synthesized video content from the breakpoint, so it does not need to access all the video streams, which saves bandwidth.
  • FIG. 9 is a block diagram of an apparatus for implementing playback of a video from a breakpoint in accordance with another embodiment of the present invention.
  • The apparatus 90 of FIG. 9 differs from that of FIG. 8 in that it further includes a generating unit 83 and a storage unit 84; a detailed description of the components that are the same as or similar to those in FIG. 8 is omitted.
  • The generating unit 83 is configured to generate EPG metadata carrying view description information of the multi-view video, so that the second terminal can determine the reference view identifiers of the virtual view according to the view description information of the multi-view video and the angle of the virtual view.
  • The view description information of the multi-view video may include, but is not limited to, calibration data of the real cameras corresponding to the real views, or a correspondence between the real views and angles of virtual views.
  • The storage unit 84 may store a bookmark received from the first terminal.
  • The first terminal and the second terminal may be the same terminal or different terminals.
  • For example, the first terminal may create a bookmark upon receiving an indication to set a breakpoint and send the bookmark to the network server 90; the storage unit 84 then stores the bookmark, so that when the second terminal requests the bookmark, the sending unit 82 sends it to the second terminal.
  • When storing the bookmark, the storage unit 84 may store the bookmark in association with a program identifier, a user identifier, and/or a bookmark identifier, so that it can subsequently be provided to the terminal that requests it (a sketch of such a record follows).
  • Embodiments of the present invention are not limited to these specific examples of storage; bookmarks may be stored in any existing manner.
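  • Purely as an illustrative sketch of one possible stored record (the embodiments do not prescribe any particular storage format), a bookmark kept in association with a program identifier, a user identifier and a bookmark identifier might look as follows; all element names and values here are assumptions:

    <StoredBookmark>
      <BookmarkId>bm-0001</BookmarkId>
      <UserId>user-42</UserId>
      <ProgramId>program-123</ProgramId>
      <Offset>00:42:15</Offset>
      <ViewInfo>
        <VirtualViewInfo>
          <ViewAngle>3.5</ViewAngle>
        </VirtualViewInfo>
      </ViewInfo>
    </StoredBookmark>

  • Keying the record by BookmarkId, UserId or ProgramId allows the storage unit to return the matching bookmark when a terminal later requests it by that identifier.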
  • For other functions and operations of the apparatuses 80 and 90, reference may be made to the method embodiment of FIG. 3 above; details are not described again to avoid redundancy.
  • For example, the apparatus 80 or 90 may be the above-described network server; in this case, the apparatus 80 or 90 may receive the bookmark sent by the first terminal, store the bookmark, and send it to the second terminal when the second terminal requests it, so that the second terminal views the video according to the bookmark.
  • Alternatively, the apparatus 80 or 90 may be the first terminal; in this case, the apparatus 80 or 90 sends the bookmark directly to the second terminal, so that the second terminal can view the video according to the bookmark.
  • The first terminal and the second terminal may be the same terminal or different terminals.
  • A video playback system according to an embodiment of the present invention may include the above-described terminal devices 60 and 70 or the above-described apparatuses 80 and 90.
  • In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative.
  • For example, the division into units is merely a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • The mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be implemented in electrical, mechanical, or other forms.
  • The apparatus for playing a video from a breakpoint in the embodiments of the present invention may be any computer device, and the above method steps performed by the apparatus for playing a video from a breakpoint may be executed by a processor of the computer.
  • The functional units of the apparatus may also be functional units running in a processor of the computer.
  • The foregoing terminal in the embodiments of the present invention may be any terminal device, such as a mobile phone, a PDA, a computer, or a remote controller.
  • The above methods of the terminal may be performed by a processor of the terminal device.
  • The functional units of the terminal may also be functional units running in a processor of the terminal device.
  • The functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • The part of the technical solutions of the present invention that is essential or that contributes to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present invention provide a method and a device for playing a video from a breakpoint. The method includes: acquiring a bookmark from a network server, where the bookmark is created according to an indication to set a breakpoint; acquiring time information of the video carried in the bookmark and view information, corresponding to the time information, for playing the video; and playing the video from the breakpoint indicated by the time information according to the view indicated by the view information. In the embodiments of the present invention, the view information is carried in the bookmark, so that the video can be viewed according to the view indicated by the view information, which improves the user's viewing experience.

Description

从断点处播放视频的方法和设备
本申请要求于 2011年 09月 07 日提交中国专利局、 申请号为
201110264066. 8 , 发明名称为 "从断点处播放视频的方法和设备" 的中国 专利申请的优先权, 其全部内容通过引用结合在本申请中。
技术领域 本发明实施例涉及视频技术领域, 并且更具体地, 涉及从断点处播放 视频的方法和设备。 背景技术
常规二维视频都是单视角视频流, 通过一个摄像机捕获内容并编码生 成视频流进行存储或者传输。随着多媒体技术的发展,三维视频( 3D video , 3DV )得到了发展, 三维视频正从多视角视频向自由视点( free viewpoint ) 视频发展。
一个典型的多视角视频系统包括以下内容: 1.内容获取: 多摄像机阵列从不同视角同时捕捉同一场景, 生成多个 视角的视频序列。
2.编码: 由于多个视角之间的相似性, 相对于基本视角编码, 其他视 角编码时的预测结构都采用了视角间预测, 因此视角解码时需联合解码所 依赖的视角 (至少包括基本视角)。 MVC ( Mul t iview video coding , 多视 角视频编码 )码流由某个视角及其解码所依赖的视角的 NAL ( Network Abs tract ion Layer , 网络提取层) 包构成的子流构成一个操作点 (0P, Operat ion Point )。 OP可独立解码。
3.传输: 多视角带来了数据量的增加, 传输时不应该传输所有视角信 息,应只传输满足解码用户选择视角的视角信息。 MVC码流或者内容的接入 通过操作点 0P来描述, 在媒体描述信息中描述了 0P和目标输出视角的对 应关系, 一个 0P包含了可解码目标输出视角的所有媒体数据。 当用户选 择的视角和目标输出视角相同, 则按照对应的 0P接入就会获得满足解码该 用户选择视角的视角内容。 4.终端显示: 用户在观看视频时, 用户可以选择任意一个或者一组最 佳视角观看。 选择的最佳视角可以是真实摄像机对应的视角也可是参考真 实视角合成的虚拟视角。
此外, 断点续看是视频系统中的一个重要体验。 用户在观看节目过程 中可随时保存书签(bookmark ), 实现节目内容的 "跨屏断点续看"。 可以 在网络服务器中存储该书签, 书签中记录的信息确保用户之后在同一设备 或者其他设备上收看此节目时, 可精准定位断点续看, 省去拖拽时间进度 条寻找观看点的麻烦。
相对于二维视频, 在多视角视频系统中, 用户在观看视频时可选择一 个或者一组最佳视角进行观看。 但是, 如果用户在网络服务器上保存书签 并在下次按照该书签继续观看此节目时, 终端只能请求接入从观看点开始 的所有的多视角视频内容或者基本视角视频内容供用户观看, 用户需要重 新切换视角来寻找最佳视角, 造成用户视角体验的缺失, 并且增加了操作 的复杂度。 发明内容
本发明实施例提供一种从断点处播放视频的方法和设备, 能够提升用 户观看视频的视角体验。
一方面, 提供了一种从断点处播放视频的方法, 包括: 获取书签, 所 述书签根据设置断点的指示创建; 获取所述书签携带的视频的时间信息, 以及与所述时间信息对应的播放视频的视角信息; 按照所述视角信息所表 示的视角, 从所述时间信息所表示的断点处播放视频。 所述视角信息可包括对应于真实摄像机的真实视角的视角标识。 所述 按照所述视角信息所表示的视角, 从所述时间信息所表示的断点处播放视 频, 包括: 请求接入与所述真实视角的视角标识对应的视频流; 从所述断 点处播放所述视频流。 所述视角信息可包括参考至少两个真实视角合成的虚拟视角的角度。 所述按照所述视角信息所表示的视角, 从所述时间信息所表示的断点处播 放视频, 包括: 根据所述虚拟视角的角度, 确定所述虚拟视角的至少两个 参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述虚拟 视角的至少两个真实视角的视角标识; 请求接入与所述至少两个参考视角 标识对应的视频流; 将所述与所述至少两个参考视角标识对应的视频流合 成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成的视频内 容。 所述根据所述虚拟视角的角度, 确定所述虚拟视角的至少两个参考视 角标识, 可包括: 从所述视角信息获取与所述虚拟视角对应的至少两个参 考视角标识。 所述根据所述虚拟视角的角度信息, 确定所述虚拟视角的至少两个参 考视角标识, 可包括: 获取电子节目指南 EPG元数据, 所述 EPG元数据携 带多视角视频的视角描述信息, 其中所述多视角视频的视角描述信息包括 真实视角对应的真实摄像机的标定数据, 或者包括真实视角与所述虚拟视 角的角度的对应关系; 根据所述真实视角对应的真实摄像机的标定数据或 者真实视角与所述虚拟视角的角度的对应关系, 以及根据所述虚拟视角的 角度信息, 确定所述参考视角标识。 另一方面, 提供了一种从断点处播放视频的方法, 包括: 第一终端接 收设置断点的指示; 所述第一终端根据所述指示创建书签, 所述书签携带 视频的时间信息, 以及与所述时间信息对应的播放视频的视角信息; 所述 第一终端发送所述书签, 以便第二终端按照所述视角信息所表示的视角, 从所述时间信息所表示的断点处播放视频, 所述第一终端与第二终端为同 一终端或者为不同终端。 所述视角信息可包括对应于真实摄像机的真实视角的视角标识, 以便 于第二终端请求接入与所述真实视角的视角标识对应的视频流; 并从所述 断点处播放所述视频流。 或者, 所述视角信息可包括参考至少两个真实视角合成的虚拟视角的 角度, 以便于第二终端根据所述虚拟视角的角度, 确定所述虚拟视角的至 少两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所 述虚拟视角的至少两个真实视角的视角标识; 请求接入与所述至少两个参 考视角标识对应的视频流; 并将所述与所述至少两个参考视角标识对应的 视频流合成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成 的视频内容。 如果所述视角信息所表示的视角为所述虚拟视角, 则所述视角信息还 可包括所述虚拟视角对应的至少两个参考视角标识, 所述至少两个参考视 识, 以便所述第二终端请求接入与所述至少两个参考视角标识对应的视频 流, 并将所述与所述至少两个参考视角标识对应的视频流合成为对应于所 述虚拟视角的视频内容, 从所述断点处播放所合成的视频内容。 另一方面, 提供了一种从断点处播放视频的方法, 包括: 接收第二终 端对书签的请求, 所述书签携带视频的时间信息, 以及与所述时间信息对 应的播放视频的视角信息; 向第二终端发送所述书签, 以便所述第二终端 按照所述视角信息所表示的视角, 从所述时间信息所表示的断点播放视频。 所述视角信息可包括对应于真实摄像机的真实视角的视角标识; 所述 以便所述第二终端按照所述视角信息所表示的视角, 从所述时间信息所表 示的断点处播放视频具体为: 第二终端请求接入与所述真实视角的视角标 识对应的视频流 , 并从所述断点处播放所述视频流。 或者, 所述视角信息可包括参考至少两个真实视角合成的虚拟视角的 角度; 所述以便所述第二终端按照所述视角信息所表示的视角, 从所述时 间信息所表示的断点处播放视频具体为: 第二终端根据所述虚拟视角的角 度, 确定所述虚拟视角的至少两个参考视角标识, 所述至少两个参考视角 请求接入与所述至少两个参考视角标识对应的视频流; 将所述与所述至少 两个参考视角标识对应的视频流合成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成的视频内容。 或者, 所述视角信息可包括所述虚拟视角的角度和所述虚拟视角对应 的至少两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合 成所述虚拟视角的至少两个真实视角的视角标识, 所述以便所述第二终端 按照所述视角信息所表示的视角, 从所述时间信息所表示的断点处播放视 频具体为: 所述第二终端请求接入与所述至少两个参考视角标识对应的视 频流, 并将所述与所述至少两个参考视角标识对应的视频流合成为对应于 所述虚拟视角的视频内容, 从所述断点处播放所合成的视频内容。
如果所述视角信息包括参考至少两个真实视角合成的虚拟视角的角 度, 则所述方法还可包括: 生成携带多视角视频的视角描述信息的电子节 目指南 EPG元数据, 所述多视角视频的视角描述信息包括真实视角对应的 真实摄像机的标定数据, 或者包括真实视角与所述虚拟视角的角度的对应 关系, 以便所述第二终端根据真实视角对应的真实摄像机的标定数据或者 真实视角与所述虚拟视角的角度的对应关系, 以及根据所述虚拟视角的角 度信息, 确定所述参考视角标识。
所述方法还可包括: 存储从第一终端接收的所述书签。 另一方面, 提供了一种终端设备, 包括: 获取单元, 用于获取书签, 所述书签根据设置断点的指示创建; 解析单元, 用于获取所述书签携带的 的视频的时间信息, 以及与所述时间信息对应的播放视频的视角信息; 播 放单元, 用于按照所述视角信息所表示的视角, 从所述时间信息所表示的 断点处播放视频。
如果所述解析单元获取的视角信息包括对应于真实摄像机的真实视角 的视角标识, 则所述播放单元可请求接入与所述真实视角的视角标识对应 的视频流, 并从所述断点处播放所述视频流。
如果所述解析单元获取的视角信息包括参考至少两个真实视角合成的 虚拟视角的角度, 则所述播放单元可根据所述虚拟视角的角度, 确定所述 虚拟视角的至少两个参考视角标识, 所述至少两个参考视角标识分别对应 于所述至少两个参考视角标识的视频流, 并将所述对应于所述至少两个参 考视角标识的视频流合成为对应于所述虚拟视角的视频内容, 从所述断点 处播放所合成的视频内容。 所述播放单元可从所述视角信息获取与所述虚拟视角的角度对应的至 少两个参考视角标识。 或者, 所述播放单元可获取电子节目指南 EPG元数 据, 所述 EPG元数据携带多视角视频的视角描述信息, 其中所述多视角视 频的视角描述信息包括真实视角对应的真实摄像机的标定数据, 或者包括 真实视角与所述虚拟视角的角度的对应关系; 根据所述真实视角对应的真 实摄像机的标定数据或者真实视角与所述虚拟视角的角度的对应关系, 以 及根据所述虚拟视角的角度信息, 确定所述参考视角标识。
另一方面, 提供了一种终端设备, 包括: 指示接收单元, 用于接收设 置断点的指示; 书签创建单元, 用于根据所述指示接收单元所接收的指示 创建书签, 所述书签携带视频的时间信息, 以及与所述时间信息对应的播 放视频的视角信息; 书签传送单元, 用于发送所述书签, 以便第二终端按 照所述视角信息所表示的视角, 从所述时间信息所表示的断点处播放视频, 所述终端设备和所述第二终端为同一终端或者为不同终端。 所述书签创建单元创建的书签中携带的视角信息可包括对应于真实摄 像机的真实视角的视角标识, 以便于第二终端请求接入与所述真实视角的 视角标识对应的视频流; 并从所述断点处播放所述视频流。
或者, 所述视角信息可包括参考至少两个真实视角合成的虚拟视角的 角度, 以便于第二终端根据所述虚拟视角的角度, 确定所述虚拟视角的至 少两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所 述虚拟视角的至少两个真实视角的视角标识; 请求接入与所述至少两个参 考视角标识对应的视频流; 并将所述与所述至少两个参考视角标识对应的 视频流合成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成 的视频内容。
或者, 所述视角信息可包括所述虚拟视角的角度和所述虚拟视角对应 的至少两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合 成所述虚拟视角的至少两个真实视角的视角标识, 以便所述第二终端请求 接入与所述至少两个参考视角标识对应的视频流, 并将所述与所述至少两 个参考视角标识对应的视频流合成为对应于所述虚拟视角的视频内容, 从 所述断点处播放所合成的视频内容。 另一方面, 提供了一种实现从断点处播放视频的装置, 包括: 接收单 元, 用于接收第二终端对书签的请求, 所述书签携带视频的时间信息, 以 及与所述时间信息对应的播放视频的视角信息; 发送单元, 用于向第二终 端发送所述书签, 以便所述第二终端按照所述视角信息所表示的视角, 从 所述时间信息所表示的断点处播放视频。 所述发送单元发送的书签中携带的视角信息可包括对应于真实摄像机 的真实视角的视角标识, 以便所述第二终端请求接入与所述真实视角的视 角标识对应的视频流, 并从所述断点处播放所述视频流。 或者, 所述视角信息可包括参考至少两个真实视角合成的虚拟视角的 角度, 以便所述第二终端根据所述虚拟视角的角度, 确定所述虚拟视角的 至少两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合成 所述虚拟视角的至少两个真实视角的视角标识; 请求接入与所述至少两个 参考视角标识对应的视频流; 将所述与所述至少两个参考视角标识对应的 视频流合成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成 的视频内容。 或者, 所述视角信息可包括所述虚拟视角的角度和所述虚拟视角对应 的至少两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合 成所述虚拟视角的至少两个真实视角的视角标识, 以便所述第二终端请求 接入与所述至少两个参考视角标识对应的视频流, 并将所述与所述至少两 个参考视角标识对应的视频流合成为对应于所述虚拟视角的视频内容, 从 所述断点处播放所合成的视频内容。 所述装置还可包括生成单元, 用于生成携带多视角视频的视角描述信 息的电子节目指南 EPG元数据, 所述多视角视频的视角描述信息包括真实 视角对应的真实摄像机的标定数据, 或者包括真实视角与所述虚拟视角的 角度的对应关系, 以便所述第二终端根据真实视角对应的真实摄像机的标 定数据或者真实视角与所述虚拟视角的角度的对应关系, 以及根据所述虚 拟视角的角度信息, 确定所述参考视角标识。 所述装置还可包括存储单元, 用于存储从第一终端接收的所述书签。 本发明实施例在书签中携带视角信息, 从而能够按照视角信息所表示 的视角观看视频, 提升了用户观看视频的视角体验。 附图说明
为了更清楚地说明本发明实施例的技术方案, 下面将对实施例或现有 技术描述中所需要使用的附图作简单地介绍, 显而易见地, 下面描述中的 附图仅仅是本发明的一些实施例, 对于本领域普通技术人员来讲, 在不付 出创造性劳动的前提下, 还可以根据这些附图获得其他的附图。
图 1是本发明一个实施例的从断点处播放视频的方法的流程图。
图 2是本发明另一实施例的从断点处播放视频的方法的流程图。 图 3是本发明另一实施例的从断点处播放视频的方法的流程图。 图 4是本发明一个实施例的从断点处播放视频过程的示意流程图。 图 5是本发明另一实施例从断点处播放视频过程的示意流程图。
图 6是本发明一个实施例的终端设备的框图。 图 7是本发明另一实施例的终端设备的框图。 图 8是本发明一个实施例的实现从断点处播放视频的装置的框图。 图 9是本发明另一实施例的实现从断点处播放视频的装置的框图。 具体实施方式
下面将结合本发明实施例中的附图, 对本发明实施例中的技术方案进 行清楚、 完整地描述, 显然, 所描述的实施例是本发明一部分实施例, 而 不是全部的实施例。 基于本发明中的实施例, 本领域普通技术人员在没有 作出创造性劳动前提下所获得的所有其他实施例 , 都属于本发明保护的范 围。
图 1是本发明一个实施例的从断点处播放视频的方法的流程图。 图 1 的方法可以由设置断点的终端设备(下文中称为 "第一终端")执行。 101 , 第一终端接收设置断点的指示。 例如, 所述指示可根据用户的按键操作产生, 例如用户直接从菜单中 选择创建书签。 或者, 所述指示可以由系统根据其他触发条件产生, 例如 系统可以在用户进行退出视频操作时自行创建书签。 本发明实施例对所述 指示不作限制, 可以采用任何现有的指示方式。
102 , 第一终端根据所述指示创建书签, 该书签携带视频的时间信息, 以及与所述时间信息对应的播放视频的视角信息。 可选地, 作为一个实施例, 时间信息可表示断点设置时视频的播放时 间, 视角信息可表示播放视频在设置断点时的观看视角。 视角信息所表示 的视角可以是对应于真实摄像机的真实视角或参考多个真实视角合成的虚 拟视角。 用户在播放视频时可选择任意一个或者一组最佳视角进行观看。 用户所选的视角可能是真实视角, 也可能是通过多个真实视角合成的虚拟 视角。 第一终端可记录用户选择的视角的信息, 并在书签中携带该视角的 信息, 即所述视角信息。 在一个可选的例子中, 如果视角信息所表示的视角为真实视角, 则视 角信息可包括真实视角的视角标识(例如, 记为 Viewld )。 或者, 如果视角 信息所表示的视角为虚拟视角, 则视角信息可包括虚拟视角的角度(例如, 记为 ViewAng le )。 进一步, 在另一个可选的例子中, 如果视角信息所表示的视角为虚拟 视角, 则视角信息还可以包括虚拟视角对应的参考视角标识(例如, 记为 ReferenceViewId )。 参考视角标识可对应于用于合成该虚拟视角的真实视 角的视角标识。
103 , 第一终端发送书签, 以便第二终端按照视角信息所表示的视角, 从时间信息所表示的断点处播放视频。 第一终端与第二终端可以是同一终端或者为不同终端, 本发明实施例 对此不作限制。 例如, 第一终端和第二终端可以是属于同一个用户的两个 终端, 该用户能够在两个终端上观看同样的视频内容, 例如用户在第一终 端观看视频, 因为一些原因退出视频, 下次观看时可根据书签中记录的信 息在第一终端或者第二终端上观看。 或第一终端和第二终端可以是分属于 不同用户的终端, 第一终端可以在认为有趣的内容处设置断点, 让其它的 用户观看。 或者, 第一终端和第二终端是同一个终端, 同一个用户或不同 的用户可以选择在该终端上观看同样的视频内容。
第一终端可将创建的书签上传并存储至网络服务器。 由于书签中记录 了与表示断点的时间信息对应的视角信息, 因此用户在下次观看视频时(在 同一终端或不同终端上观看均可), 能够从网络服务器获取书签并按照书签 中视角信息所表示的视角续看视频。
或者, 第一终端也可以直接将书签发送给第二终端。 例如, 第一终端 的用户想要和第二终端的用户分享视频内容, 则可以向第二终端的用户推 荐该书签, 第二终端获取书签并按照书签中视角信息所表示的视角续看视 频。
书签中还可以包括相应的节目标识。 网络服务器在存储书签时, 可以 与节目标识相关联地存储书签, 以便后续提供给请求书签的终端。 或者, 书签中还可以包括记录书签的终端的用户标识, 网络服务器在存储书签时, 可以与用户标识相关联地存储书签, 以便后续提供给请求书签的终端。 可 替换地, 书签中可以包括唯一的书签标识, 网络服务器在存储书签时, 可 以与书签标识相关联地存储书签, 以便后续提供给请求书签的终端。 本发 明实施例不限于这些具体的存储方式的例子, 而是可以按照任何现有方式 存储书签。 本发明实施例在书签中携带视角信息, 从而能够按照视角信息所表示 的视角续看视频, 提升了用户观看视频的视角体验。
图 2是本发明另一实施例的从断点处播放视频的方法的流程图。 图 2 的方法由续播视频的终端设备(例如, 上述第二终端)执行。
201 , 获取书签, 该书签根据设置断点的指示创建。 例如, 第二终端可以从网络服务器获取书签, 或者从创建书签的第一 终端获取书签。 所获取的书签可以是第一终端在图 1的步骤 101中创建的 书签。 为避免重复, 不再详细描述。 第二终端可通过向网络服务器或第一终端发送请求获取书签, 或通过 接收网络服务器或第一终端的推送等方式获取书签。 例如, 第二终端可以 向用户展示视频节目列表。 当接收到用户续看节目的指示(例如, 用户点 击该节目的继续观看按钮) 时, 第二终端可以请求网络服务器或第一终端 下发对应的书签。 本发明实施例对第二终端获取书签的方式不作限制, 可 以按照任何现有方式从网络服务器或第一终端请求书签。 例如, 在对书签的请求中, 可携带相应的节目标识、 用户标识和 /或书 签标识等信息, 以便网络服务器或第一终端根据该信息提供相应的书签。
这里, 设置断点的终端设备可以和获取书签的终端设备为同一设备, 也可以是不同的两个设备。
202 , 获取书签携带的视频的时间信息, 以及与所述时间信息对应的播 放视频的视角信息。
如图 1的实施例所述, 时间信息可表示断点设置时视频的播放时间, 视角信息可表示播放视频在设置断点时的观看视角。 例如, 视角信息所表 示的视角可以是对应于真实摄像机的真实视角或参考多个真实视角合成的 虚拟视角。
在一个可选的例子中, 如果视角信息所表示的视角为真实视角, 则视 角信息可包括真实视角的视角标识 Viewld。 或者, 如果视角信息所表示的 视角为虚拟视角, 则视角信息可包括虚拟视角的角度 ViewAng le。
203 , 按照视角信息所表示的视角, 从时间信息所表示的断点处播放视 频。
可选地, 作为一个实施例, 如果视角信息所表示的视角为真实视角, 则视角信息中会包括真实视角的视角标识 Viewld, 终端设备可请求接入与 真实视角的视角标识 Viewld对应的视频流。 另一方面, 如果视角信息所表示的视角为虚拟视角, 则终端设备可根 据虚拟视角的角度 V i ewAng 1 e接入视频流。 本发明实施例对终端设备接入视频流的方式不做限制。 可仅仅接入断 点之后的视频流, 也可以接入比断点更早的视频流, 但是终端设备均从断 点处播放该视频流。 可选地, 作为一个具体的实施例, 在接入和播放视频流时, 终端设备 可根据虚拟视角的角度 ViewAng le,确定虚拟视角的至少两个参考视角标识 ReferenceViewId0 至少两个参考视角标识分别对应于用于合成虚拟视角的 至少两个真实视角的视角标识。 举例来说, 假设以两个真实视角 (分别标 识为 Viewldl和 Viewld2 )合成一个虚拟视角的情况为例, 则该虚拟视角的 两个参考视角标识 ReferenceViewIdl和 Ref erenceViewId2可分别采用 Viewldl和 Viewld2的值。 然后, 终端设备请求接入对应于所确定的参考视角标识
ReferenceViewId的视频流,并将对应于参考视角标识 Ref erenceViewId的 视频流合成为对应于虚拟视角的视频内容, 从断点处播放所合成的视频内 容。
例如, 当视角信息中包括参考视角标识 ReferenceViewId时, 终端设 备可以直接从视角信息获取虚拟视角的角度对应的参考视角标识。
另外, 如果视角信息中不包括参考视角标识 ReferenceViewId, 终端设 备也可以通过其他信息获得虚拟视角的角度对应的参考视角标识。 在一个 可选的例子中,终端设备可通过网络服务器下发的 EPG( E lectronic Program Guide, 电子节目指南)元数据, 结合视角信息中的虚拟视角的角度
V i ewAng le, 确定虚拟视角的角度 V i ewAng 1 e对应的参考视角标识。 具体地, 终端设备可获取电子节目指南 EPG元数据, 该 EPG元数据携 带多视角视频的视角描述信息, 其中, 多视角视频的视角描述信息可包括 但不限于真实视角对应的真实摄像机的标定数据, 或者真实视角与虚拟视 角的角度的对应关系。 然后终端设备根据所述真实视角对应的真实摄像机 的标定数据或者真实视角与所述虚拟视角的角度的对应关系, 以及根据所 述虚拟视角的角度信息, 确定所述参考视角标识。
这里, EPG和书签的获取顺序不对本发明实施例的范围构成限制。 EPG 可以在终端设备获取书签之前下发, 例如网络服务器可以在终端设备点播 节目之前下发给该终端设备。 本发明实施例在书签中携带视角信息, 从而能够按照视角信息所表示 的视角续看视频, 提升了用户观看视频的视角体验。 另外, 终端设备在续看视频时, 无需用户再次选择观看视角, 提高了 用户的便利度。 此外, 终端设备无需接入所有视频流, 而只需接入与真实 视角或虚拟视角相关的视频流, 能够节省带宽, 提高系统效率。 图 3是本发明另一实施例的从断点处播放视频的方法的流程图。 图 3 的方法由网络服务器或创建书签的第一终端执行, 并且与图 1和图 2的方 法相对应, 因此, 将适当省略详细的描述。
301 , 接收第二终端对书签的请求, 所述书签携带视频的时间信息, 以 及与所述时间信息对应的播放视频的视角信息。 例如, 在步骤 301中请求的书签可以是在图 1的步骤 102中创建的书 签, 因此不再重复描述。 例如, 在对书签的请求中, 可携带相应的节目标识、 用户标识和 /或书 签标识等信息, 以便网络服务器根据该信息提供相应的书签。 例如, 视角信息所表示的视角可以是对应于真实摄像机的真实视角或 参考多个真实视角合成的虚拟视角。 在一个可选的例子中, 如果视角信息所表示的视角为真实视角, 则视 角信息可包括但不限于真实视角的视角标识 Viewld, 以便第二终端请求接 入与真实视角的视角标识 Viewld对应的视频流, 并从断点处播放视频流。 或者, 如果视角信息所表示的视角为虚拟视角, 则视角信息可包括但不限 于虚拟视角的角度 V i ewAng 1 e , 以便第二终端根据虚拟视角的角度
ViewAng le, 确定虚拟视角的至少两个参考视角标识, 所述至少两个参考视 求接入与所述至少两个参考视角标识对应的视频流; 将与所述至少两个参 考视角标识对应的视频流合成为对应于虚拟视角的视频内容, 从断点处播 放所合成的视频内容。 进一步, 在另一个可选的例子中, 如果视角信息所表示的视角为虚拟 视角, 则视角信息还可以包括虚拟视角对应的参考视角标识
ReferenceViewId0 参考视角标识可对应于用于合成该虚拟视角的真实视角 的视角标识, 以便第二终端请求接入与所述至少两个参考视角标识对应的 视频流, 并将所述与所述至少两个参考视角标识对应的视频流合成为对应 于所述虚拟视角的视频内容, 从所述断点处播放所合成的视频内容。
302 ,向第二终端发送书签,以便第二终端按照视角信息所表示的视角, 从时间信息所表示的断点处播放视频。 上述第一终端与第二终端可以是同一终端或者为不同终端, 本发明实 施例对此不作限制。 可选地, 作为一个实施例, 如果视角信息所表示的视角为虚拟视角, 则网络服务器生成的 EPG元数据可携带多视角视频的视角描述信息, 以便 第二终端根据多视角视频的视角描述信息和虚拟视角的角度, 确定虚拟视 角的至少两个参考视角标识。 其中, 多视角视频的视角描述信息可包括但 不限于真实视角对应的真实摄像机的标定数据, 或者真实视角与虚拟视角 的角度的对应关系。 本发明实施例在书签中携带视角信息, 从而能够按照视角信息所表示 的视角续看视频, 提升了用户观看视频的视角体验。 下面结合具体例子, 更加详细的描述本发明的实施例。
图 4是本发明一个实施例的从断点处播放视频过程的示意流程图。图 4 的实施例是网络服务器存储书签的例子。 图 4中, "终端 A" 表示设置断点 的终端设备(例如, 作为上述 "第一终端" 的一个例子), "终端 B"表示续 播视频的终端设备(例如, 作为上述 "第二终端" 的一个例子), "网络服 务器" 表示存储书签的服务端设备。 终端 A和终端 B可以是同一设备, 也 可以是不同的设备。 另外, 终端 A和终端 B可属于同一用户, 也可以属于 不同的用户, 本发明实施例对此不作限制。
401 , 网络服务器以视频流的方式, 向终端 A发送视频内容。 终端 A的 用户可选择一个或一组最佳视角。 在一个非限制性的具体例子中, 终端 A可先接收一段完整的 MVC视频 流, 然后从 MVC视频流获取摄像机标定数据等参数。 摄像机的标定参数可 用于确定摄像机对应的角度标识、 拍摄角度等信息, 终端 A根据这些信息 生成视角选择的图示(例如滑动条), 以及视角选择的图示与视角的对应关 系。 使得用户能在滑动条上拖动以选择最佳视角。 用户选择的最佳视角可 能对应于真实摄像机的真实视角, 也可能位于两个真实摄像机的真实视角 之间 (即, 虚拟视角)。
402 , 终端 A的用户给出设置断点的指示。 此时终端 A可根据该设置断 点的指示, 创建书签(bookmark ), 记录用户当前的观看信息。 书签中的信息主要可分为两类: 一类是书签中必须携带的信息, 包括 与书签关联的节目标识 ProgramId、 节目时间偏移 Off set (即表示断点的 时间信息); 另一类是非必要信息如创建者的用户名、 用户输入的注释等。 书签中的这些信息属于现有技术, 因此终端怎么获取这些信息, 以及后期 服务器与终端怎么使用这些信息本发明均不做详细的解释和特别的限定。 在本发明实施例中, 书签中还记录视角信息, 用于指示用户当前所选 择的视角。
例如, 终端 A会记录用户选择的最佳视角, 并根据最佳视角接入对应 的视频流。 具体地, 终端可以根据选择视角的图示与视角的对应关系, 获取用户 选择的最佳视角。 如果该最佳视角对应于真实摄像机, 则终端 A记录该最 佳视角的视角标识 Viewld, 并在创建书签时将该最佳视角的视角标识 Viewld增加到书签中, 作为视角信息中所包括的真实视角的视角标识。 或者, 假设用户选择一个最佳视角并且该最佳视角是虚拟视角, 由两 个真实视角合成, 则终端 A可记录这个虚拟视角的角度 ViewAng le, 并在创 建书签时将该角度 ViewAng le增加到书签中, 作为视角信息中所包括的虚 拟视角的角度。 应注意, 本发明实施例中, 虚拟视角的角度 ViewAng le可 以使用各种形式。 例如, 该角度 ViewAng le可以是相应的角度值, 包括各 种可用的单位形式, 如度或弧度, 也可以是其他形式, 例如对应的索引值 等, 本发明实施例对此不作限制。 进一步地 , 如有必要 , 终端 A还可以记录合成虚拟视角的两个真实视 角的视角标识, 并在创建书签时将这两个真实视角的视角标识增加到书签 中, 作为视角信息中所包括的虚拟视角的参考视角标识。 以 XML ( Extensible Markup Language, 可扩展标记语言 )格式的书签 为例, 可以在书签中增加如下描述:
<xs:element name="ViewInfo" type="tns:ViewInfoType" minOccurs="0"/>
其中 ViewInfo表示视角信息。 视角信息 ViewInfo所表示的视角可包括两种类型, 即真实视角和虚拟视角。 可描述如下, 其中 RealViewType表示真实视角类型, VirtualViewType表示虚拟视角类型。
<xs:complexType name="ViewInfoType">
  <xs:sequence>
    <xs:choice>
      <xs:element name="RealViewInfo" type="tns:RealViewType" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="VirtualViewInfo" type="tns:VirtualViewType" minOccurs="0" maxOccurs="unbounded"/>
    </xs:choice>
  </xs:sequence>
</xs:complexType>
对于真实视角类型 RealViewType, 视角信息中可携带该真实视角的视角标识 ViewId。
<xs:complexType name="RealViewType">
  <xs:sequence>
    <xs:element name="ViewId" type="xs:string" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
对于虚拟视角类型 VirtualViewType, 视角信息中可携带该虚拟视角的角度 ViewAngle。
<xs:complexType name="VirtualViewType">
  <xs:sequence>
    <xs:element name="ViewAngle" type="xs:decimal" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
403 , 终端 A向网络服务器上传书签。
404 , 网络服务器存储所接收的书签。 网络服务器在存储书签时, 可以与节目标识、 用户标识和 /或书签标识 相关联地存储书签, 以便后续提供给请求书签的终端。 本发明实施例不限 于这些具体的存储方式的例子, 而是可以按照任何现有方式存储书签, 因 此不对网络服务器存储书签的方式进行详细的解释和特别的限定。
405 , 如果终端 B的用户希望观看视频, 则终端 B向网络服务器请求书 签。 例如, 用户使用自己的用户标识登录终端 A , 观看视频节目, 并且在终 端 A上记录书签。在之后某一时间使用同样的用户标识登录终端 B , 希望在 终端 B上观看同一视频。 或者, 终端 A的用户可在记录书签之后向终端 B 的用户推荐该视频节目, 并向终端 B提供获取书签所需的信息。 终端 B根 据终端 A提供的信息, 向网络服务器请求书签。 例如, 终端 B可以向用户展示视频节目列表, 以及哪些节目具有书签。 当接收到用户续看节目的指示(例如, 用户点击该节目的继续观看按钮) 时, 终端 B会向网络服务器请求对应的书签。 本发明实施例对请求书签的 方式不作限制, 可以按照任何现有方式从网络服务器请求书签。
406 , 网络服务器根据终端 B的请求, 向终端 B 下发书签。
407 , 终端 B根据书签中记录的信息, 向网络服务器请求接入相应的视 频流。 当书签中携带的视角信息包括真实视角对应的视角标识时, 终端 B可 根据视角标识对应的操作点 0P请求多视角视频流, 并进行解码和显示。 另一方面, 当书签中携带的视角信息包括虚拟视角的角度时, 终端 B 确定参考视角标识, 再根据参考视角标识获取多视角视频流。 例如终端 B 可获取摄像机标定数据, 根据该标定数据和虚拟视角的角度, 确定参考视 角标识。 参考视角标识对应于真实视角的视角标识。 终端 B在显示所获取 的视频流之前, 需从视频流中获取参考视角标识对应的摄像机参数以便进 行虚拟视角的合成, 并显示合成的视频内容。 具体地, 终端 B首先需请求接入一段完整的 MVC视频流以获取摄像机 标定数据。 MVC视频流中摄像机的标定参数存放在多视角获取信息中的补充 增强信息 ( Supplementa l Enhancement Informat ion, SEI )消息 ( Mul t iv iew acqui s i t ion informat ion SEI mes sage )中且关联一个 IDR ( Ins tantaneous Decoding Refresh, 解码即时刷新) AU ( Acces s Uni t , 访问单元)。 然后 终端 B根据标定参数计算摄像机的位置。 标定数据包括内参和外参两部分。 内参包括焦距中心点等确定摄像机内部几何和光学特性, 外参包括坐标系 的转换矩阵确定摄像机在一个世界坐标系中的三维位置和方向。 在计算得 到摄像机位置之后, 确定虚拟视角对应的参考视角标识。 例如, 因为相邻 的真实视角相似度较高, 所以通常的视角合成可基于相邻的两个真实视角。 下面还将结合具体例子描述标定数据和根据标定数据确定参考视角标识的 方法。 因此, 本发明实施例在书签中携带视角信息, 从而能够按照视角信息 所表示的视角观看视频, 提升了用户观看视频的视角体验。
上面描述了视角信息中包括虚拟视角的角度的例子。 可选地, 在图 4 的实施例中, 如果视角信息为虚拟视角, 则在步骤 402中, 终端 A创建的 书签中还可记录参考视角标识 ReferenceViewId,以便于终端 B能直接从书 签中获取参考视角标识, 并仅仅请求接入与参考视角标识对应的视频流, 而无需接收一段完整的 MVC视频流, 这样能够节省终端 B的接入带宽, 提 高数据传输的效率。
如有必要, 终端 A还可以记录合成虚拟视角的两个真实视角的视角标识, 并在创建书签时将这两个真实视角的视角标识增加到书签中, 分别作为视角信息中所包括的该虚拟视角的两个参考视角标识 ReferenceViewId。 在此情况下, 对于虚拟视角类型 VirtualViewType, 视角信息中可携带该虚拟视角的角度 ViewAngle和对应的参考视角标识 ReferenceViewId。
<xs:complexType name="VirtualViewType">
  <xs:sequence>
    <xs:element name="ViewAngle" type="xs:decimal" minOccurs="0"/>
    <xs:element name="ReferenceViewId" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
书签中其他参数的描述可以与上面的例子相同, 因此不再重复。
然后, 在步骤 407中, 终端 B根据真实摄像机对应的视角标识或者虚 拟视角的参考视角标识对应的操作点 0P请求多视角视频流, 然后解码视频 流以显示真实视角内容, 或者从视频流中获取参考视角标识对应的视频内 容, 合成对应于虚拟视角的视频内容并显示合成的视频内容。 因此, 本发明实施例在书签中携带视角信息, 从而能够按照视角信息 所表示的视角续看视频, 提升了用户观看视频的视角体验。 同时, 在视角 信息中直接携带参考视角标识, 无需解析视频流获得摄像机标定参数, 能 够减少断点续看时的数据交互量, 快速接入用户选择的视角。
图 5是本发明另一实施例从断点处播放视频过程的示意流程图。 图 5 中, "终端 A" 表示设置断点的终端设备, "终端 B" 表示续播视频的终端设 备, "网络服务器 " 表示存储书签的服务端设备。 终端 A和终端 B可以是同 一设备, 也可以是不同的设备。
图 5中, 与图 4相同或相似的步骤使用相同的附图标记表示, 因此适 当省略重复的描述。 图 5的实施例与图 4的不同之处在于, 如果书签中视 角信息所表示的视角为虚拟视角并且视角信息中包括虚拟视角的角度, 而 不包括参考视角标识, 则网络服务器可以在 EPG元数据中携带多视角视频 的视角描述信息, 以便终端 B根据多视角视频的视角描述信息和书签中包 括的虚拟视角的角度, 确定虚拟视角的参考视角标识。
405a , 网络服务器向终端 B提供 EPG元数据, 以便终端 B进行视频点 播。 在该 EPG元数据中, 携带多视角视频的视角描述信息。 该视角描述信 息可以是视角对应摄像机的标定数据, 也可以是描述真实视角和虚拟角度 的对应关系。
下面以 XML形式的 EPG元数据为例。 表 1是在 EPG元数据中携带多视角视频的视角描述信息的一个具体实 现方式。 表 1是在 EPG元数据增加的 XML元素描述视角的标定数据的例子, 这些标定数据可以类似于在步骤 401中从 MVC视频流中得到的摄像机标定 数据。
表 1 EPG元数据中携带摄像机标定数据的视角信息
表 1中, 0-N个(N为正整数)视角标识 Viewld中的每个 Viewld分别 具有相应的内参和外参。 内参可包括水平焦距、 垂直焦距、 原点水平坐标、 原点垂直坐标、 扭曲因子等, 外参可包括旋转矩阵和平移矢量等。 从这些 数据可以计算出各个 Viewld的摄像机的视角的位置和方向, 计算过程与现 有技术中相似, 因此不再贅述。
在步骤 407中, 由于书签中标识了视角信息, 当观看视角是虚拟视角 时, 终端 B可根据 EPG元数据中描述的多个视角对应摄像机的标定数据, 计算得出各个摄像机的视角的位置和方向, 因此可以确定各个摄像机的拍 摄角度。 然后终端 B可根据书签中携带的虚拟视角的角度 ViewAng le确定 观看视角对应的参考视角标识。 例如, 假设 ViewAng le介于某两个摄像机 (对应的视角标识分别为 Viewldl和 Viewld2 )的拍摄角度之间, 则可以将 V i ewl d 1和 V i ewl d2的值作为该虚拟视角 V i ewAng 1 e对应的两个参考视角标 识的值。 表 2是在 EPG元数据中携带多视角视频的视角描述信息另一具体实现 方式。 表 2在 EPG元数据增加 XML元素描述真实视角和虚拟视角的对应关 系。
表 2 EPG元数据中携带真实视角和虚拟视角的对应关系
表 2中, 0-N个(N为正整数)视角关系 ViewRe lat ion中, 每个 ViewRe lat ion包括参考视角的左视角、 参考视角的右视角、 参考视角可合 成虚拟视角的最小角度和参考视角可合成虚拟视角的最大角度。 表 2中的 参数直接给出了各个摄像机的拍摄角度。 在一个实施例中, 网络服务器可 根据表 1中的参数计算得到表 2的参数。 例如, 参考视角的左视角为 Viewldl , 参考视角的右视角为 Viewld2 , 参考视角可合成虚拟视角的最小 角度为 0,参考视角可合成虚拟视角的最大角度 5 ,表示这两个视角 Viewldl 和 Viewld2可合成虚拟视角的角度范围为 0-5。 类似地, 在步骤 407中, 终端 B可根据表 2中真实视角和虚拟视角的 对应关系, 根据书签中携带的虚拟视角的角度 ViewAng le确定观看视角对 应的参考视角标识。 例如, 假设 ViewAng le介于某两个摄像机(对应的视 角标识分别为 Viewldl和 Viewld2 )的拍摄角度之间, 即落入两个摄像机可 合成的虚拟视角的角度范围内, 则可以将 Viewldl和 Viewld2的值作为该 虚拟视角 ViewAng le对应的两个参考视角标识的值。 表 1和表 2中, "0" 代表该参数是可选参数 ( Opt iona l )。 另外, 上面的实施例中给出的各种具体参数名仅仅是示例性的, 不对 本发明实施例的范围构成限制, 而是可以根据实际需要进行调整。 这样的 修改均落入本发明实施例的范围内。 本发明实施例提供了支持多视角系统的断点续看的方法。 通过在书签 中记录用户选择的视角信息, 使得用户在下次观看此节目时, 可以从选择 的视角进行观看, 延续了用户的视角体验, 省去了重新切换视角寻找最佳 视角麻烦。 同时, 在用户选择的视角为虚拟视角的情况下, 如果在书签中 携带虚拟视角的参考视角标识, 或者在 EPG元数据中携带用于确定参考视 角标识的标定数据或对应关系等视角信息, 则用户在下次观看此节目时, 能够仅仅接入之前选择的最佳视角进行观看, 无需接入所有视角的视频流, 减少了数据交互量, 节省了带宽。
图 6是本发明一个实施例的终端设备的框图。 图 6的终端设备 60的一 个例子是设置断点的终端设备 (第一终端), 例如图 4和图 5中的终端 A。 终端设备 60包括指示接收单元 61、 书签创建单元 62和书签传送单元 63。 指示接收单元 61接收设置断点的指示。 书签创建单元 62根据所述指 示接收单元所接收的指示创建书签, 所述书签携带视频的时间信息, 以及 与所述时间信息对应的播放视频的视角信息。 书签传送单元 63发送所述书 签, 以便第二终端按照所述视角信息所表示的视角, 从所述时间信息所表 示的断点处播放视频。 终端设备 60和所述第二终端为同一终端或者为不同 终端。 本发明实施例在书签中携带视角信息, 从而能够按照视角信息所表示 的视角观看视频, 提升了用户观看视频的视角体验。 书签传送单元 63可以将书签发送至网络服务器存储, 以便第二终端从 网络服务器获取书签。 或者, 书签传送单元 63也可以直接将书签发送至第 二终端。 终端设备 60可实现上述图 1-图 5的实施例中涉及设置断点的终端设备 (第一终端) 的操作。 例如, 如果视角信息所表示的视角为对应于真实摄 像机的真实视角, 则书签创建单元 62创建的书签中携带的视角信息可包括 真实视角的视角标识。 或者, 如果视角信息所表示的视角为参考多个真实 视角合成的虚拟视角, 则书签创建单元 62创建的书签中携带的视角信息可 包括虚拟视角的角度。 可选地, 作为另一实施例, 如果视角信息所表示的视角为虚拟视角, 则书签创建单元 62创建的书签中携带的视角信息还可以包括虚拟视角的角 度对应的至少两个参考视角标识, 所述至少两个参考视角标识分别对应于 减少数据交互量。
终端设备 60的其他功能和操作可参照上面的方法实施例所述, 为避免 重复, 不再详细描述
图 7是本发明另一实施例的终端设备的框图。 图 7的终端设备 70的一 个例子是续播视频的终端设备 (第二终端), 例如图 4和图 5中的终端 B。 终端设备 70包括获取单元 71、 解析单元 72和播放单元 73。 获取单元 71获取书签, 所述书签根据设置断点的指示创建。 解析单元 72获取所述书签携带的的视频的时间信息, 以及与所述时间信息对应的播 放视频的视角信息。 播放单元 73按照所述视角信息所表示的视角, 从所述 时间信息所表示的断点处播放视频。 本发明实施例在书签中携带视角信息, 从而能够按照视角信息所表示 的视角观看视频, 提升了用户观看视频的视角体验。
获取单元 71可以从网络服务器获取书签, 或者可以直接从创建书签的 第一终端获取书签。
终端设备 70可实现上述图 1-图 5的实施例中涉及续播视频的终端设备 (第二终端)的操作。 例如, 如果解析单元 72获取的视角信息所表示的视 角为对应于真实摄像机的真实视角, 则该视角信息可包括真实视角的视角 标识, 此时播放单元 73可请求接入与真实视角的视角标识对应的视频流, 并从断点处播放视频流。 另一方面, 如果解析单元 72获取的视角信息所表示的视角为参考多个 真实视角合成的虚拟视角, 则该视角信息可包括虚拟视角的角度, 此时播 放单元 73可根据虚拟视角的角度, 确定虚拟视角的至少两个参考视角标识 (该至少两个参考视角标识分别对应于用于合成虚拟视角的至少两个真实 视角的视角标识), 请求接入对应于参考视角标识的视频流, 并将对应于参 考视角标识的视频流合成为对应于虚拟视角的视频内容, 从断点处播放所 合成的视频内容。 其中, 播放单元 73可从视角信息获取虚拟视角对应的参 考视角标识, 或者, 播放单元 73可以获取 EPG元数据 (该 EPG元数据携带 多视角视频的视角描述信息, 其中多视角视频的视角描述信息包括但不限 于真实视角对应的真实摄像机的标定数据, 或者包括真实视角与所述虚拟 视角的角度的对应关系;), 并根据所述真实视角对应的真实摄像机的标定 数据或者真实视角与虚拟视角的角度的对应关系, 以及根据虚拟视角的角 度信息, 确定参考视角标识。 例如, 多视角视频的视角描述信息可以是表 1 的标定数据或表 2的对应关系。 终端设备 70的其他功能和操作可参照上面的方法实施例所述, 为避免 重复, 不再详细描述。
图 8是本发明一个实施例的实现从断点处播放视频的装置的框图。图 8 的装置 80的一个例子是图 4和图 5中的网络服务器, 或者是图 6中的终端 设备 60。 装置 80包括接收单元 81和发送单元 82。 接收单元 81接收第二终端对书签的请求, 所述书签携带视频的时间信 息, 以及与所述时间信息对应的播放视频的视角信息。 发送单元 82向第二 终端发送书签, 以便第二终端按照视角信息所表示的视角, 从时间信息所 表示的断点处播放视频。 本发明实施例在书签中携带视角信息, 从而能够按照视角信息所表示 的视角续看视频, 提升了用户观看视频的视角体验。
装置 80可实现上述图 1-图 5的实施例中涉及网络服务器的操作。例如, 如果视角信息所表示的视角为对应于真实摄像机的真实视角, 则发送单元 82发送的书签中携带的视角信息可包括真实视角的视角标识。 或者, 如果 视角信息所表示的视角为参考多个真实视角合成的虚拟视角, 则发送单元 82发送的书签中携带的视角信息可包括虚拟视角的角度。 可选地, 如果视角信息所表示的视角为虚拟视角, 则发送单元 82发送 的书签中携带的视角信息还可以包括虚拟视角对应的至少两个参考视角标 识, 该至少两个参考视角标识分别对应于用于合成虚拟视角的至少两个真 实视角的视角标识。 这样第二终端可请求接入与至少两个参考视角标识对 应的视频流, 并将与至少两个参考视角标识对应的视频流合成为对应于虚 拟视角的视频内容, 从断点处播放所合成的视频内容, 因此无需接入所有 视频流, 可以节省带宽。 图 9是本发明另一实施例的实现从断点处播放视频的装置的框图。图 9 的装置 90与图 8的不同之处在于还包括生成单元 83和存储单元 84 , 将省 略与图 8中相同或相似的部件的详细描述。 生成单元 83用于生成携带多视角视频的视角描述信息的 EPG元数据, 以便第二终端能够根据多视角视频的视角描述信息和虚拟视角的角度, 确 定虚拟视角的参考视角标识。 例如, 如表 1和表 2所示, 上述多视角视频 的视角描述信息可包括但不限于真实视角对应的真实摄像机的标定数据, 或者真实视角与虚拟视角的角度的对应关系。 存储单元 84可以存储从第一终端接收的书签。 所述第一终端与第二终 端为同一终端或者为不同终端。 例如, 第一终端可在接收到设置断点的指 示时创建书签, 并将书签发送至网络服务器 90, 然后存储单元 84存储该书 签, 以便第二终端请求书签时由发送单元 82将该书签发送给第二终端。 存储单元 84在存储书签时, 可以与节目标识、 用户标识和 /或书签标 识相关联地存储书签, 以便后续提供给请求书签的终端。 本发明实施例不 限于这些具体的存储方式的例子, 而是可以按照任何现有方式存储书签。 装置 80和 90的其他功能和操作可参照上面图 3的方法实施例所述, 为避免重复, 不再详细描述。 例如, 装置 80或 90可以是上述网络服务器, 在此情况下, 装置 80或 90可接收第一终端发送的书签并存储该书签, 并 在第二终端请求该书签时发送给第二终端, 以便第二终端根据该书签观看 视频。 或者, 装置 80或 90可以是第一终端, 在此情况下, 装置 80或 90 直接将书签发送给第二终端, 以便第二终端根据该书签观看视频。 上述第 一终端和第二终端可以是同一终端, 也可以是不同的终端。 根据本发明实施例的视频播放系统可包括上述终端设备 60、 70或上述 装置 80、 90。 本领域普通技术人员可以意识到, 结合本文中所公开的实施例描述的 各示例的单元及算法步骤, 能够以电子硬件、 或者计算机软件和电子硬件 的结合来实现。 这些功能究竟以硬件还是软件方式来执行, 取决于技术方 案的特定应用和设计约束条件。 专业技术人员可以对每个特定的应用来使 用不同方法来实现所描述的功能, 但是这种实现不应认为超出本发明的范 围。 所属领域的技术人员可以清楚地了解到, 为描述的方便和简洁, 上述 描述的系统、 装置和单元的具体工作过程, 可以参考前述方法实施例中的 对应过程, 在此不再贅述。
在本申请所提供的几个实施例中, 应该理解到, 所揭露的系统、 装置 和方法, 可以通过其它的方式实现。 例如, 以上所描述的装置实施例仅仅 是示意性的, 例如, 所述单元的划分, 仅仅为一种逻辑功能划分, 实际实 现时可以有另外的划分方式, 例如多个单元或组件可以结合或者可以集成 到另一个系统, 或一些特征可以忽略, 或不执行。 另一点, 所显示或讨论 的相互之间的耦合或直接耦合或通信连接可以是通过一些接口, 装置或单 元的间接耦合或通信连接, 可以是电性, 机械或其它的形式。 本发明实施例中的从断点处播放视频的装置可以为任何的计算机设 备, 而从断点处播放视频的装置执行的上述方法步骤都可以由计算机的处 理器执行, 从断点处播放视频的装置的功能单元也可以为运行于计算机的 的处理器中的功能单元。 此外, 本发明实施例的上述终端可以为任何的终 端设备, 如手机, PDA, 电脑, 远程控制器等, 执行上述终端的方法可以为 终端设备的处理器。 终端的功能单元也可以为运行于终端设备的处理器中 的功能单元。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的, 作为单元显示的部件可以是或者也可以不是物理单元, 即可以位于一个地 方, 或者也可以分布到多个网络单元上。 可以根据实际的需要选择其中的 部分或者全部单元来实现本实施例方案的目的。 另外, 在本发明各个实施例中的各功能单元可以集成在一个处理单元 中, 也可以是各个单元单独物理存在, 也可以两个或两个以上单元集成在 一个单元中。 所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使 用时, 可以存储在一个计算机可读取存储介质中。 基于这样的理解, 本发 明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的 部分可以以软件产品的形式体现出来, 该计算机软件产品存储在一个存储 介质中, 包括若干指令用以使得一台计算机设备(可以是个人计算机, 月良 务器, 或者网络设备等)执行本发明各个实施例所述方法的全部或部分步 骤。 而前述的存储介质包括: U盘、移动硬盘、只读存储器(ROM, Read-Only Memory ), 随机存取存储器 ( RAM, Random Acces s Memory ), 磁碟或者光盘 等各种可以存储程序代码的介质。 以上所述, 仅为本发明的具体实施方式, 但本发明的保护范围并不局 限于此, 任何熟悉本技术领域的技术人员在本发明揭露的技术范围内, 可 轻易想到变化或替换, 都应涵盖在本发明的保护范围之内。 因此, 本发明 的保护范围应所述以权利要求的保护范围为准。

Claims

权利要求
1、 一种从断点处处播放视频的方法, 其特征在于, 包括: 获取书签, 所述书签根据设置断点的指示创建; 获取所述书签携带的视频的时间信息, 以及与所述时间信息对应的播 放视频的视角信息; 按照所述视角信息所表示的视角, 从所述时间信息所表示的断点处播 放视频。
2、 如权利要求 1所述的方法, 其特征在于, 所述视角信息包括对应于 真实摄像机的真实视角的视角标识, 所述按照所述视角信息所表示的视角, 从所述时间信息所表示的断点 处播放视频, 包括: 请求接入与所述真实视角的视角标识对应的视频流; 从所述断点处播放所述视频流。
3、 如权利要求 1所述的方法, 其特征在于, 所述视角信息包括参考至 少两个真实视角合成的虚拟视角的角度; 所述按照所述视角信息所表示的视角, 从所述时间信息所表示的断点 处播放视频, 包括: 根据所述虚拟视角的角度, 确定所述虚拟视角的至少两个参考视角标 识, 所述至少两个参考视角标识分别对应于用于合成所述虚拟视角的至少 两个真实视角的视角标识; 请求接入与所述至少两个参考视角标识对应的视频流; 将所述与所述至少两个参考视角标识对应的视频流合成为对应于所述 虚拟视角的视频内容, 从所述断点处播放所合成的视频内容。
4、 如权利要求 3所述的方法, 其特征在于, 所述根据所述虚拟视角的 角度, 确定所述虚拟视角的至少两个参考视角标识, 包括: 从所述视角信息获取与所述虚拟视角对应的至少两个参考视角标识。
5、 如权利要求 3所述的方法, 其特征在于, 所述根据所述虚拟视角的 角度信息, 确定所述虚拟视角的至少两个参考视角标识, 包括: 获取电子节目指南 EPG元数据, 所述 EPG元数据携带多视角视频的视 角描述信息, 其中所述多视角视频的视角描述信息包括真实视角对应的真 实摄像机的标定数据, 或者包括真实视角与所述虚拟视角的角度的对应关 系; 根据所述真实视角对应的真实摄像机的标定数据或者真实视角与所述 虚拟视角的角度的对应关系, 以及根据所述虚拟视角的角度信息, 确定所 述参考视角标识。
6、 一种从断点处播放视频的方法, 其特征在于, 包括: 第一终端接收设置断点的指示; 所述第一终端根据所述指示创建书签, 所述书签携带视频的时间信息, 以及与所述时间信息对应的播放视频的视角信息; 所述第一终端发送所述书签, 以便第二终端按照所述视角信息所表示 的视角, 从所述时间信息所表示的断点处播放视频。
7、 如权利要求 6所述的方法, 其特征在于, 所述视角信息包括对应于真实摄像机的真实视角的视角标识, 以便于 第二终端请求接入与所述真实视角的视角标识对应的视频流; 并从所述断 点处播放所述视频流; 或 所述视角信息包括参考至少两个真实视角合成的虚拟视角的角度, 以 便于第二终端根据所述虚拟视角的角度, 确定所述虚拟视角的至少两个参 考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述虚拟视 角的至少两个真实视角的视角标识; 请求接入与所述至少两个参考视角标 识对应的视频流; 并将所述与所述至少两个参考视角标识对应的视频流合 成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成的视频内 容。
8、 如权利要求 6所述的方法, 其特征在于, 如果所述视角信息所表示 的视角为所述虚拟视角, 则所述视角信息还包括所述虚拟视角对应的至少 两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述 虚拟视角的至少两个真实视角的视角标识, 以便所述第二终端请求接入与 所述至少两个参考视角标识对应的视频流, 并将所述与所述至少两个参考 视角标识对应的视频流合成为对应于所述虚拟视角的视频内容, 从所述断 点处播放所合成的视频内容。
9、 一种从断点处播放视频的方法, 其特征在于, 包括:
接收第二终端对书签的请求, 所述书签携带视频的时间信息, 以及与 所述时间信息对应的播放视频的视角信息;
向第二终端发送所述书签, 以便所述第二终端按照所述视角信息所表 示的视角, 从所述时间信息所表示的断点处播放视频。
1 0、 如权利要求 9所述的方法, 其特征在于, 所述视角信息包括对应于真实摄像机的真实视角的视角标识; 所述以 便所述第二终端按照所述视角信息所表示的视角, 从所述时间信息所表示 的断点处播放视频具体为: 第二终端请求接入与所述真实视角的视角标识 对应的视频流, 并从所述断点处播放所述视频流; 或 所述视角信息包括参考至少两个真实视角合成的虚拟视角的角度; 所 述以便所述第二终端按照所述视角信息所表示的视角, 从所述时间信息所 表示的断点处播放视频具体为: 第二终端根据所述虚拟视角的角度, 确定 所述虚拟视角的至少两个参考视角标识, 所述至少两个参考视角标识分别 与所述至少两个参考视角标识对应的视频流; 将所述与所述至少两个参考 视角标识对应的视频流合成为对应于所述虚拟视角的视频内容, 从所述断 点处播放所合成的视频内容; 或 所述视角信息包括所述虚拟视角的角度和所述虚拟视角对应的至少两 个参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述虚 拟视角的至少两个真实视角的视角标识, 所述以便所述第二终端按照所述 视角信息所表示的视角, 从所述时间信息所表示的断点处播放视频具体为: 所述第二终端请求接入与所述至少两个参考视角标识对应的视频流, 并将 所述与所述至少两个参考视角标识对应的视频流合成为对应于所述虚拟视 角的视频内容, 从所述断点处播放所合成的视频内容。
11、 如权利要求 9所述的方法, 其特征在于, 如果所述视角信息包括 参考至少两个真实视角合成的虚拟视角的角度, 则所述方法还包括:
生成携带多视角视频的视角描述信息的电子节目指南 EPG元数据, 所 述多视角视频的视角描述信息包括真实视角对应的真实摄像机的标定数 据, 或者包括真实视角与所述虚拟视角的角度的对应关系, 以便所述第二 终端根据真实视角对应的真实摄像机的标定数据或者真实视角与所述虚拟 视角的角度的对应关系, 以及根据所述虚拟视角的角度信息, 确定所述参 考视角标识。
12、 如权利要求 9-1 1任一项所述的方法, 其特征在于, 所述方法还包 括:
存储从第一终端接收的所述书签。
1 3、 一种终端设备, 其特征在于, 包括: 获取单元, 用于获取书签, 所述书签根据设置断点的指示创建; 解析单元, 用于获取所述书签携带的视频的时间信息, 以及与所述时 间信息对应的播放视频的视角信息;
播放单元, 用于按照所述视角信息所表示的视角, 从所述时间信息所 表示的断点处播放视频。
14、 如权利要求 1 3所述的终端设备, 其特征在于, 如果所述解析单元 获取的视角信息包括对应于真实摄像机的真实视角的视角标识, 贝' J 所述播放单元具体用于请求接入与所述真实视角的视角标识对应的视 频流, 并从所述断点处播放所述视频流。
15、 如权利要求 1 3所述的终端设备, 其特征在于, 如果所述解析单元 获取的视角信息包括参考至少两个真实视角合成的虚拟视角的角度, 贝' J 所述播放单元具体用于根据所述虚拟视角的角度, 确定所述虚拟视角 的至少两个参考视角标识, 所述至少两个参考视角标识分别对应于用于合 成所述虚拟视角的至少两个真实视角的视角标识; 请求接入对应于所述至 少两个参考视角标识的视频流, 并将所述对应于所述至少两个参考视角标 识的视频流合成为对应于所述虚拟视角的视频内容, 从所述断点处播放所 合成的视频内容。
16、 如权利要求 15所述的终端设备, 其特征在于, 所述播放单元具体用于从所述视角信息获取与所述虚拟视角的角度对 应的至少两个参考视角标识; 或者, 所述播放单元具体用于获取电子节目指南 EPG元数据, 所述 EPG元数 据携带多视角视频的视角描述信息, 其中所述多视角视频的视角描述信息 包括真实视角对应的真实摄像机的标定数据, 或者包括真实视角与所述虚 拟视角的角度的对应关系; 根据所述真实视角对应的真实摄像机的标定数 据或者真实视角与所述虚拟视角的角度的对应关系, 以及根据所述虚拟视 角的角度信息, 确定所述参考视角标识。
17、 一种终端设备, 其特征在于, 包括: 指示接收单元, 用于接收设置断点的指示; 书签创建单元, 用于根据所述指示接收单元所接收的指示创建书签, 所述书签携带视频的时间信息, 以及与所述时间信息对应的播放视频的视 角信息;
书签传送单元, 用于发送所述书签, 以便第二终端按照所述视角信息 所表示的视角, 从所述时间信息所表示的断点处播放视频。
18、 如权利要求 17所述的终端设备, 其特征在于, 所述书签创建单元创建的书签中携带的视角信息包括对应于真实摄像 机的真实视角的视角标识, 以便于第二终端请求接入与所述真实视角的视 角标识对应的视频流; 并从所述断点处播放所述视频流; 或 所述视角信息包括参考至少两个真实视角合成的虚拟视角的角度, 以 便于第二终端根据所述虚拟视角的角度, 确定所述虚拟视角的至少两个参 考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述虚拟视 角的至少两个真实视角的视角标识; 请求接入与所述至少两个参考视角标 识对应的视频流; 并将所述与所述至少两个参考视角标识对应的视频流合 成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成的视频内 ^-; 或 所述视角信息包括所述虚拟视角的角度和所述虚拟视角对应的至少两 个参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述虚 拟视角的至少两个真实视角的视角标识, 以便所述第二终端请求接入与所 述至少两个参考视角标识对应的视频流, 并将所述与所述至少两个参考视 角标识对应的视频流合成为对应于所述虚拟视角的视频内容, 从所述断点 处播放所合成的视频内容。
19、 一种实现从断点处播放视频的装置, 其特征在于, 包括: 接收单元, 用于接收第二终端对书签的请求, 所述书签携带视频的时 间信息, 以及与所述时间信息对应的播放视频的视角信息; 发送单元, 用于向第二终端发送所述书签, 以便所述第二终端按照所 述视角信息所表示的视角, 从所述时间信息所表示的断点处播放视频。
20、 如权利要求 19所述的装置, 其特征在于, 所述发送单元发送的书签中携带的视角信息包括对应于真实摄像机的 真实视角的视角标识, 以便所述第二终端请求接入与所述真实视角的视角 标识对应的视频流, 并从所述断点处播放所述视频流; 或 所述视角信息包括参考至少两个真实视角合成的虚拟视角的角度, 以 便所述第二终端根据所述虚拟视角的角度, 确定所述虚拟视角的至少两个 参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述虚拟 视角的至少两个真实视角的视角标识; 请求接入与所述至少两个参考视角 标识对应的视频流; 将所述与所述至少两个参考视角标识对应的视频流合 成为对应于所述虚拟视角的视频内容, 从所述断点处播放所合成的视频内 ^-; 或 所述视角信息包括所述虚拟视角的角度和所述虚拟视角对应的至少两 个参考视角标识, 所述至少两个参考视角标识分别对应于用于合成所述虚 拟视角的至少两个真实视角的视角标识, 以便所述第二终端请求接入与所 述至少两个参考视角标识对应的视频流, 并将所述与所述至少两个参考视 角标识对应的视频流合成为对应于所述虚拟视角的视频内容, 从所述断点 处播放所合成的视频内容。
21、如权利要求 19所述的装置,其特征在于, 所述网络服务器还包括: 生成单元, 用于生成携带多视角视频的视角描述信息的电子节目指南 EPG 元数据, 所述多视角视频的视角描述信息包括真实视角对应的真实摄像机 的标定数据, 或者包括真实视角与所述虚拟视角的角度的对应关系, 以便 所述第二终端根据真实视角对应的真实摄像机的标定数据或者真实视角与 所述虚拟视角的角度的对应关系, 以及根据所述虚拟视角的角度信息, 确 定所述参考视角标识。
22、 如权利要求 19-21任一项所述的装置, 其特征在于, 所述装置还 包括:
存储单元, 用于存储从第一终端接收的所述书签。
PCT/CN2012/078605 2011-09-07 2012-07-13 从断点处播放视频的方法和设备 WO2013034030A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110264066.8 2011-09-07
CN201110264066.8A CN102984560B (zh) 2011-09-07 2011-09-07 从断点处播放视频的方法和设备

Publications (1)

Publication Number Publication Date
WO2013034030A1 true WO2013034030A1 (zh) 2013-03-14

Family

ID=47831508

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/078605 WO2013034030A1 (zh) 2011-09-07 2012-07-13 从断点处播放视频的方法和设备

Country Status (2)

Country Link
CN (1) CN102984560B (zh)
WO (1) WO2013034030A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103686381A (zh) * 2013-12-13 2014-03-26 乐视致新电子科技(天津)有限公司 智能电视及其浏览器中视频播放记录的处理方法和装置
CN103747295B (zh) * 2014-01-28 2017-03-01 北京智谷睿拓技术服务有限公司 服务信息交互方法及设备
CN106409031B (zh) * 2015-08-03 2020-11-13 北京鸿合智能系统有限公司 一种录播学生端记录问题的方法和装置
CN108307197A (zh) * 2015-12-01 2018-07-20 幸福在线(北京)网络技术有限公司 虚拟现实视频数据的传输方法、播放方法及装置和系统
CN106170094B (zh) * 2016-09-07 2020-07-28 阿里巴巴(中国)有限公司 全景视频的直播方法及装置
CN106686368A (zh) * 2016-12-26 2017-05-17 华为软件技术有限公司 虚拟现实vr视频播放的设备和播放vr视频的方法
CN108156467B (zh) * 2017-11-16 2021-05-11 腾讯科技(成都)有限公司 数据传输方法和装置、存储介质及电子装置
WO2022214047A1 (en) * 2021-04-08 2022-10-13 Beijing Bytedance Network Technology Co., Ltd. Supplemental enhancement information message constraints

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200615925A (en) * 2004-11-08 2006-05-16 Koninkl Philips Electronics Nv Player for optical disc and its play back method
CN101741841A (zh) * 2009-12-10 2010-06-16 青岛海信宽带多媒体技术有限公司 一种实现多媒体设备间断点续播的方法及装置
CN102104623A (zh) * 2010-12-20 2011-06-22 广州市动景计算机科技有限公司 通过移动终端进行媒体文件断点续播的方法和系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070157072A1 (en) * 2005-12-29 2007-07-05 Sony Ericsson Mobile Communications Ab Portable content sharing
US8051081B2 (en) * 2008-08-15 2011-11-01 At&T Intellectual Property I, L.P. System and method for generating media bookmarks
CN101662693B (zh) * 2008-08-27 2014-03-12 华为终端有限公司 多视点媒体内容的发送和播放方法、装置及系统
US20110173524A1 (en) * 2010-01-11 2011-07-14 International Business Machines Corporation Digital Media Bookmarking Comprising Source Identifier

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200615925A (en) * 2004-11-08 2006-05-16 Koninkl Philips Electronics Nv Player for optical disc and its play back method
CN101741841A (zh) * 2009-12-10 2010-06-16 青岛海信宽带多媒体技术有限公司 一种实现多媒体设备间断点续播的方法及装置
CN102104623A (zh) * 2010-12-20 2011-06-22 广州市动景计算机科技有限公司 通过移动终端进行媒体文件断点续播的方法和系统

Also Published As

Publication number Publication date
CN102984560A (zh) 2013-03-20
CN102984560B (zh) 2017-06-20

Similar Documents

Publication Publication Date Title
WO2013034030A1 (zh) 从断点处播放视频的方法和设备
CN109691123B (zh) 用于受控观察点和取向选择视听内容的方法和装置
US20190104326A1 (en) Content source description for immersive media data
US11094130B2 (en) Method, an apparatus and a computer program product for video encoding and video decoding
JP2019521583A (ja) イメージ中の最も関心のある領域の高度なシグナリング
CN111417008B (zh) 用于虚拟现实的方法、装置和计算机可读介质
EP2417770A1 (en) Methods and apparatus for efficient streaming of free view point video
US11805303B2 (en) Method and apparatus for storage and signaling of media segment sizes and priority ranks
US10931930B2 (en) Methods and apparatus for immersive media content overlays
CN112219403B (zh) 沉浸式媒体的渲染视角度量
EP3777137A1 (en) Method and apparatus for signaling of viewing extents and viewing space for omnidirectional content
CN112188219B (zh) 视频接收方法和装置以及视频发送方法和装置
US20220150296A1 (en) Method and apparatus for grouping entities in media content
US20230224512A1 (en) System and method of server-side dynamic adaptation for split rendering
JP2022545880A (ja) コードストリームの処理方法、装置、第1端末、第2端末及び記憶媒体
EP3777219B1 (en) Method and apparatus for signaling and storage of multiple viewpoints for omnidirectional audiovisual content
WO2023103875A1 (zh) 自由视角视频的视角切换方法、装置及系统
US20230007314A1 (en) System and method of server-side dynamic spatial and temporal adaptations for media processing and streaming

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12830278

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12830278

Country of ref document: EP

Kind code of ref document: A1