CN111866525A - Multi-view video playing control method and device, electronic equipment and storage medium - Google Patents

Multi-view video playing control method and device, electronic equipment and storage medium

Info

Publication number
CN111866525A
CN111866525A
Authority
CN
China
Prior art keywords
video
viewpoint
video stream
stream data
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011006555.9A
Other languages
Chinese (zh)
Inventor
刘阿海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011006555.9A
Publication of CN111866525A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/21805 Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/2335 Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • H04N21/234309 Reformatting operations of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/437 Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/4398 Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N21/440218 Reformatting operations of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/44222 Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N21/47202 End-user interface for requesting content on demand, e.g. video on demand

Abstract

Embodiments of the present application disclose a playing control method and device for multi-view video, where the multi-view video can be stored on a cloud server for retrieval by a video playing client. The method comprises the following steps: acquiring video stream data captured from each viewpoint according to a set of video stream addresses corresponding to the multi-view video, where the video stream data is obtained by encoding, based on a synchronous clock, the video source signals captured from each viewpoint; playing the video stream data corresponding to a first viewpoint; when a viewpoint switching instruction is detected, determining a second viewpoint according to the instruction; and locating, in the video stream data corresponding to the second viewpoint, a starting video frame whose encoding timestamp is the same as that of the currently played video frame, then switching playback to the second viewpoint's video stream data starting from that frame. The technical scheme of these embodiments enables seamless switching between the video pictures of different viewpoints and supports multi-angle viewing of a video.

Description

Multi-view video playing control method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method and an apparatus for controlling playing of a multi-view video, a method and an apparatus for processing a multi-view video, and an electronic device and a computer-readable storage medium.
Background
Cloud technology is a hosting technology that unifies hardware, software, network, and other resources in a wide area network or local area network to realize the computation, storage, processing, and sharing of data. Cloud technology is very widely applied; for example, in the field of video processing, a video source is stored in the cloud, and a video playing client plays the video by acquiring video stream data from the cloud.
At present, a video source presents only the picture from a single viewing angle during playback. If a user wants to see pictures from different angles, for example, to watch a player's shot in a ball game from multiple angles, this requirement cannot be met. How to support multi-angle viewing during video playback is therefore a technical problem yet to be solved in the prior art.
Disclosure of Invention
In order to solve the foregoing technical problem, embodiments of the present application provide a method and an apparatus for controlling playing of a multi-view video, and also provide a method and an apparatus for processing a multi-view video, and provide an electronic device and a computer-readable storage medium.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of the present application, there is provided a playing control method for multi-view video, including: acquiring video stream data captured from each viewpoint according to a set of video stream addresses corresponding to a multi-view video, where the video stream data is obtained by encoding, based on a synchronous clock, the video source signals captured from each viewpoint; playing the video stream data corresponding to a first viewpoint; when a viewpoint switching instruction is detected, determining a second viewpoint according to the instruction; and locating, in the video stream data corresponding to the second viewpoint, a starting video frame whose encoding timestamp is the same as that of the currently played video frame, then switching playback to the second viewpoint's video stream data starting from that starting video frame.
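The locating-and-switching step described above can be sketched in a minimal Python illustration. The class, field names such as `encode_ts`, and the in-memory frame lists are all hypothetical, since the patent does not specify an implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VideoFrame:
    encode_ts: int   # encoding timestamp added by the synchronous clock
    data: bytes      # encoded picture payload

class MultiViewPlayer:
    """Plays one viewpoint's stream and switches viewpoints on a shared timestamp."""

    def __init__(self, streams: List[List[VideoFrame]]):
        self.streams = streams   # one frame list per viewpoint
        self.viewpoint = 0       # first (default) viewpoint
        self.position = 0        # index of the currently played frame

    def current_frame(self) -> VideoFrame:
        return self.streams[self.viewpoint][self.position]

    def switch_viewpoint(self, second_viewpoint: int) -> VideoFrame:
        """Locate the starting frame in the target stream whose encoding
        timestamp equals that of the currently played frame, then switch."""
        ts = self.current_frame().encode_ts
        target = self.streams[second_viewpoint]
        for i, frame in enumerate(target):
            if frame.encode_ts == ts:   # timestamps are synchronized across streams
                self.viewpoint = second_viewpoint
                self.position = i
                return frame
        raise LookupError("no frame with matching encoding timestamp")
```

Because every stream was stamped from the same clock, the lookup succeeds at the same temporal position, which is what makes the switch appear seamless.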
According to another aspect of the present application, there is provided a video processing method for multi-view video, including: acquiring video stream data captured from each viewpoint, where the video stream data is obtained by encoding, based on a synchronous clock, the video source signals captured from each viewpoint, and all of the video stream data corresponds to the same video identifier; storing the video stream data captured from each viewpoint separately to obtain a video stream address corresponding to each piece of video stream data; and when a multi-view video acquisition request sent by a video playing client is received, returning to the client, according to the video identifier contained in the request, a video stream address set formed by the video stream addresses corresponding to all of the video stream data.
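The server-side flow above, storing per-viewpoint streams under a shared video identifier and answering an acquisition request with the full address set, might look like the following sketch. All names and the URL scheme are invented for illustration:

```python
class VideoServer:
    """Stores per-viewpoint streams under a shared video identifier and
    answers multi-view acquisition requests with the full address set."""

    def __init__(self):
        self.addresses = {}   # video_id -> {viewpoint: stream_address}

    def store_stream(self, video_id: str, viewpoint: int, data: bytes) -> str:
        # Hypothetical address scheme; a real server would allocate storage here.
        address = f"rtmp://media.example/{video_id}/{viewpoint}"
        self.addresses.setdefault(video_id, {})[viewpoint] = address
        return address

    def handle_acquisition_request(self, video_id: str) -> list:
        # Return the address set for every stream sharing this video identifier.
        return sorted(self.addresses.get(video_id, {}).values())
```

The key property is that all streams of one scene share a video identifier, so a single request retrieves every viewpoint's address.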
According to an aspect of the present application, there is provided a playback control apparatus for multi-view video, including: a multi-view video acquisition module, configured to acquire video stream data captured from each viewpoint according to a set of video stream addresses corresponding to a multi-view video, where the video stream data is obtained by encoding, based on a synchronous clock, the video source signals captured from each viewpoint; a video playing module, configured to play the video stream data corresponding to a first viewpoint; a viewpoint switching module, configured to determine a second viewpoint according to a viewpoint switching instruction when the instruction is detected; and a video playing switching module, configured to locate, in the video stream data corresponding to the second viewpoint, a starting video frame whose encoding timestamp is the same as that of the currently played video frame, and to switch playback to the second viewpoint's video stream data starting from that frame.
According to another aspect of the present application, there is provided a video processing apparatus for multi-view video, including: a video data acquisition module, configured to acquire video stream data captured from each viewpoint, where the video stream data is obtained by encoding, based on a synchronous clock, the video source signals captured from each viewpoint, and all of the video stream data corresponds to the same video identifier; a video data storage module, configured to store the video stream data captured from each viewpoint separately to obtain a video stream address corresponding to each piece of video stream data; and a video playing response module, configured to, when a multi-view video acquisition request sent by a video playing client is received, return to the client, according to the video identifier contained in the request, a video stream address set formed by the video stream addresses corresponding to all of the video stream data.
According to an aspect of the present application, there is provided an electronic device including a processor and a memory, the memory having stored thereon computer-readable instructions, which, when executed by the processor, implement the play control method of a multi-view video or the video processing method of a multi-view video as described above.
According to an aspect of the present application, there is provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to execute a play control method of a multi-view video or a video processing method of a multi-view video as described above.
In the above technical scheme, video collectors arranged at multiple viewpoints simultaneously capture the same scene to obtain multiple video source signals, and these signals are encoded based on a synchronous clock, so that in the resulting multiple streams the encoding timestamps carried by the video frame data and the audio frame data are synchronized across streams. During playback, when the video stream needs to be switched from a first viewpoint to a second viewpoint, seamless switching between the video pictures of different viewpoints is achieved by locating, in the second viewpoint's stream, the starting video frame whose encoding timestamp matches that of the currently played frame and then playing the second viewpoint's stream from that frame, thereby supporting multi-angle viewing of the video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic illustration of an implementation environment to which the present application relates.
Fig. 2 is a schematic diagram of a video playing system proposed based on the implementation environment shown in fig. 1.
Fig. 3 is a flowchart illustrating a playing control method of a multi-view video according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a play control method of a multi-view video according to another exemplary embodiment.
Fig. 5 is a flowchart illustrating a play control method of a multi-view video according to another exemplary embodiment.
Fig. 6 is a flowchart illustrating a video processing method of a multi-view video according to an exemplary embodiment.
Fig. 7 is a block diagram of a playback control apparatus for a multi-view video according to an exemplary embodiment.
Fig. 8 is a block diagram of a video processing apparatus for multi-view video according to an exemplary embodiment.
Fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the description of the present application, "a plurality" means at least two unless otherwise specified.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment related to the present application. The implementation environment includes a terminal 100 and a server 200, and the terminal 100 and the server 200 communicate with each other through a wired or wireless network.
The server 200 stores the video stream data obtained by video collectors arranged at multiple viewpoints capturing the same scene; that is, the server 200 stores the multiple video streams corresponding to a multi-view video, and the video stream data of each viewpoint is encoded based on a synchronous clock.
A video playing client runs in the terminal 100. When the terminal 100 plays a video, it acquires the video stream data corresponding to each viewpoint and selects the video stream of a default viewpoint for playback. The video playing interface of the terminal 100 also provides the user with a function for switching the playing angle: if the user triggers an angle-switching action in the interface, the terminal 100 switches to playing the video stream data corresponding to another viewpoint, and the picture played after switching is synchronized with the picture displayed in the interface at the moment of switching, thereby supporting multi-angle viewing of videos.
The terminal 100 may be any electronic device capable of running a video playing client, such as a smartphone, tablet, laptop, or desktop computer; the number of terminals 100 may be one or more (only two are shown in fig. 1), which is not limited herein. The server 200 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms, which is not limited herein.
Fig. 2 is a schematic diagram of a video playing system proposed based on the implementation environment shown in fig. 1. The system supports multi-angle viewing of videos: a user can freely switch between different angles to view the corresponding video content.
As shown in fig. 2, the video playing system includes a plurality of video collectors 10, a video encoder 20, a stream pusher 30, a video server 40, and a video playing terminal 50. The video server 40 corresponds to the server 200 in the implementation environment shown in fig. 1, and the video playback terminal 50 corresponds to the terminal 100 in the implementation environment shown in fig. 1.
Multiple video collectors 10 are deployed around the same scene to simultaneously capture that scene's video source signals from different angles. For example, in a ball game scene, multiple video collectors 10 can be arranged around the playing field to form a 360-degree video capture environment, so that the game can be shot from multiple angles at the same time. The multiple video collectors 10 thus serve as different viewpoints that capture the same scene simultaneously, and the resulting multi-view video allows the pictures of different viewpoints to be watched by switching viewpoints during playback, supporting multi-angle viewing of the video.
The video encoder 20 is configured to encode audio and video signals. In the video playing system shown in fig. 2, it specifically encodes the video source signals acquired by each video collector 10, converting the analog audio and video signals into audio and video data, thereby obtaining the video stream data captured from each viewpoint.
The video encoder 20 includes an encoding clock that adds an encoding timestamp to each encoded video frame and audio frame, so that the timestamp identifies the capture time of each frame. When the video encoder 20 encodes the multiple video source signals collected by the multiple video collectors 10, it must ensure that the encoding timestamps added to video frames and audio frames collected by different video collectors 10 at the same instant are exactly the same; that is, the encoding the video encoder 20 performs on the multiple video source signals is synchronized, which is why the encoding clock included in the video encoder 20 is also referred to as a synchronous clock.
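The synchronous-clock behaviour described here can be illustrated with a toy encoder that stamps all viewpoints' frames from one shared counter. This is a hypothetical simplification; a real encoder would assign presentation timestamps during actual compression:

```python
class SyncClockEncoder:
    """Stamps frames from all capture devices with one shared encoding clock,
    so frames captured at the same instant carry identical timestamps."""

    def __init__(self, num_viewpoints: int):
        self.num_viewpoints = num_viewpoints
        self.tick = 0   # shared (synchronous) encoding clock

    def encode_tick(self, raw_frames: list) -> list:
        """raw_frames holds one captured frame per viewpoint for this instant."""
        assert len(raw_frames) == self.num_viewpoints
        stamped = [(self.tick, frame) for frame in raw_frames]
        self.tick += 1   # advance the clock once for all viewpoints together
        return stamped
```

Because the clock advances once per capture instant for every viewpoint, frames from different streams that share a timestamp are guaranteed to show the same moment in the scene.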
The stream pusher 30 is configured to push the per-viewpoint video stream data produced by the video encoder 20 to the video server 40, so that the video server 40 stores the received multiple streams. The stream pusher 30 may be a dedicated device, or a program running in the video encoder 20; the multiple streams may be stored permanently in the video server 40, or buffered there and then transmitted to the video playing terminal 50, which is not limited herein.
In other embodiments, the video server 40 may obtain the multiple streams encoded by the video encoder 20 in other ways; for example, in a video-on-demand application scenario, the encoded video stream data may be copied directly to the video server 40, which is not limited herein.
When the video server 40 receives a multi-view video acquisition request sent by the video playing terminal 50, it returns to the terminal the video stream addresses of the video stream data collected from each viewpoint. The video playing terminal 50 then obtains the per-viewpoint video stream data from the video server 40 according to the received video stream address set, and plays the video stream data corresponding to the first viewpoint. It should be understood that the first viewpoint is a default viewpoint; for example, the video collector 10 arranged above and to the left of the scene in fig. 2 can serve as the default viewpoint.
While playing the video stream data corresponding to the first viewpoint, if the video playing terminal 50 detects a viewpoint switching instruction, it synchronously plays the video stream data corresponding to the second viewpoint. Synchronous playing means that the encoding timestamp of the last video frame played from the first viewpoint is the same as the encoding timestamp of the starting video frame played from the second viewpoint, so that after the switch the pictures the user sees remain close and continuous. This realizes seamless switching between the playback of different viewpoints and enhances the realism of the user experience.
As also shown in fig. 2, the video server 40 is mainly composed of a media server 41 and a video information request server 42. The media server 41 stores the multiple streams pushed by the stream pusher 30; for example, in a video-on-demand application scenario the streams may be stored in the media content storage 43, while in a live video scenario the streams may be buffered and the buffered data forwarded. The video information request server 42 classifies the multiple streams pushed by the stream pusher 30, for example distinguishing which viewpoint each stream corresponds to and which video angle interval that viewpoint covers, and further analyzes each stream to obtain its video information. The video information may include the video identifier, video height and width, bit rate, and encoding/decoding format, among others, which is not limited herein; it should be noted, however, that the video identifiers of the streams collected from different viewpoints of the same scene are the same.
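The per-stream video information described above could be modelled as a simple record grouped by the shared identifier. The field names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class VideoInfo:
    video_id: str      # identical for all streams of the same scene
    viewpoint: int
    width: int
    height: int
    bitrate_kbps: int
    codec: str

def group_by_video_id(infos):
    """Classify per-stream video information under the shared identifier,
    as the video information request server does for pushed streams."""
    grouped = {}
    for info in infos:
        grouped.setdefault(info.video_id, []).append(info)
    return grouped
```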
The video playing terminal 50 specifically includes a playing information request module 51, a video stream management module 52, a video stream decoding module 53, a rendering management module 54, and an action parsing module 55. When the video playing terminal 50 needs to play a video, the playing information request module 51 uses the video identifier of the video to be played to request from the video information request server 42 the video stream address set of the corresponding multi-view video, and passes the address set to the video stream management module 52. The video stream management module 52 obtains the per-viewpoint video stream data from the media server 41 according to the address set and parses each stream into video frame data and audio frame data. The video stream decoding module 53 decodes the video frame data and audio frame data parsed by the video stream management module 52 to obtain the image data and audio sample data corresponding to each viewpoint.
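The parsing step performed by the video stream management module, splitting each stream into video frame data and audio frame data, can be sketched as a trivial demultiplexer. The `(kind, payload)` packet format is a hypothetical simplification of a real container format:

```python
def demux(stream_packets):
    """Split parsed stream packets into video frame data and audio frame data,
    assuming each packet is a (kind, payload) pair."""
    video_frames, audio_frames = [], []
    for kind, payload in stream_packets:
        (video_frames if kind == "video" else audio_frames).append(payload)
    return video_frames, audio_frames
```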
The rendering management module 54 renders the image data corresponding to the first viewpoint decoded by the video stream decoding module 53 into a playing picture and synchronously plays the audio sample data corresponding to the first viewpoint, thereby playing the video at the default viewpoint. The action parsing module 55 parses a user action triggered in the video playing interface of the video playing terminal 50 to obtain the line-of-sight rotation direction and line-of-sight rotation angle indicated by the user action, and outputs the parsing result to the rendering management module 54, so that the rendering management module 54 switches to rendering the video stream data corresponding to the second viewpoint according to the parsing result, and the video playing terminal 50 synchronously plays the video stream data corresponding to the second viewpoint.
Therefore, the video playing system provided by this embodiment supports multi-angle viewing of a video: by controlling the video playing terminal 50, a user can freely switch between different angles to watch the corresponding video content, and a given video picture can be rotated continuously and seamlessly through each angle, so that the user obtains a highly realistic viewing experience.
Referring to fig. 3, fig. 3 is a flowchart illustrating a playing control method of a multi-view video according to an exemplary embodiment.
The method may be applied to the implementation environment shown in fig. 1, for example, specifically executed by the terminal 100 in the implementation environment shown in fig. 1, or executed by both the terminal 100 and the server 200 in the implementation environment shown in fig. 1.
The method can also be applied to the video playing system shown in fig. 2, for example, specifically executed by the video playing terminal 50 shown in fig. 2, or executed by the video playing terminal 50 shown in fig. 2 and the video server 40 together.
The method is described in detail below, taking its execution by the video playing terminal 50 as an example.
As shown in fig. 3, in an exemplary embodiment, the method includes at least the steps of:
Step 110: acquiring video stream data collected based on each viewpoint according to a video stream address set corresponding to the multi-viewpoint video, wherein the video stream data is obtained by encoding, based on a synchronous clock, the video source signals collected from each viewpoint.
It should be noted that a multi-view video consists of multiple paths of video obtained by simultaneously capturing the same scene with a plurality of video collectors arranged at different viewpoints; a multi-view video therefore records the state of the same scene at the same moment from different viewpoints.
During video playing, videos collected from different viewpoints are played in a switched manner. Because the video pictures played before and after switching are collected at the same moment from different angles, seamless switching of video pictures between different viewpoints can be achieved, attaining the effect of watching the video from multiple angles and meeting users' viewing needs.
After the video source signals are collected by the plurality of video collectors arranged at different viewpoints, for example, the video encoder shown in fig. 2 may encode each video source signal based on a synchronous clock, so as to obtain video stream data collected based on each viewpoint, and thus, in the video stream data corresponding to each viewpoint, the encoding timestamps added to the video frames and the audio frames collected at the same time are completely the same.
The video stream data collected based on each viewpoint is stored in a server, for example, the server 200 shown in fig. 1 or the video server 40 shown in fig. 2, so the set of video stream addresses corresponding to the multi-viewpoint video is a set formed by the video stream addresses of the video stream data.
As described above, the video stream data collected from each viewpoint share the same video identifier. Therefore, when a video playing instruction is detected, for example when a user opens a video playing client and requests multi-viewpoint video playing, a multi-viewpoint video acquisition request is sent to the server, the request containing the video identifier of the multi-viewpoint video requested for playing. When a response message returned by the server for the multi-viewpoint video acquisition request is received, the response message is parsed to obtain the video stream address set associated with the video identifier, that is, the video stream address set corresponding to the multi-viewpoint video.
Then, video stream data collected from each viewpoint is acquired from the server according to the video stream address set corresponding to the multi-viewpoint video, and the playing of the video stream data corresponding to each viewpoint is controlled, so that multi-angle playing of the video is realized at the video playing end.
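For illustration, the acquisition step described above can be sketched as follows. This is a minimal, hypothetical sketch: the response shape (a mapping with `video_id` and `stream_addresses` keys) and the address format are assumptions made for the example, not part of this disclosure.

```python
# Hypothetical client-side sketch of the acquisition step: the response to a
# multi-viewpoint video acquisition request is parsed to obtain the video
# stream address set associated with the video identifier. The dictionary
# keys and the address format are illustrative assumptions.

def parse_acquisition_response(response: dict, video_id: str) -> list:
    """Extract the video stream address set associated with video_id."""
    if response.get("video_id") != video_id:
        raise ValueError("response does not match the requested video identifier")
    return response["stream_addresses"]

# One address per viewpoint of the same scene; all share one video identifier.
response = {
    "video_id": "match-001",
    "stream_addresses": ["rtmp://media/match-001/view%d" % i for i in range(8)],
}
addresses = parse_acquisition_response(response, "match-001")
assert len(addresses) == 8
```

The client would then fetch each address in the set to obtain the per-viewpoint streams.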
Step 130: playing the video stream data corresponding to the first viewpoint.
The first viewpoint is the default viewpoint; for example, the video picture collected from the default viewpoint is the one with the best viewing angle. When video playing starts, the video picture collected from the default viewpoint is played preferentially, which gives the user the best viewing experience.
The video stream data corresponding to each viewpoint actually includes the image data and audio sample data collected from that viewpoint. Because the image data and audio sample data contained in the video source data collected at the same moment from the same viewpoint are encoded with the same encoding timestamp added, the image data and audio sample data collected from each viewpoint are synchronized with each other based on the encoding timestamp.
Therefore, the image data and audio sample data collected from the first viewpoint can be obtained from the video stream data corresponding to the first viewpoint; the image data is then rendered to obtain the playing picture of the video stream data corresponding to the first viewpoint, and the audio sample data is played synchronously according to its collection time, thereby realizing the playing of the video stream data corresponding to the first viewpoint.
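The timestamp-based synchronization of image data and audio sample data described above can be illustrated with a minimal sketch, in which `(timestamp, payload)` tuples stand in for decoded frames; the exact-match join relies on the synchronous-clock property described above and is a deliberate simplification of real audio/video synchronization.

```python
# Minimal sketch of encoding-timestamp synchronization: (timestamp, payload)
# tuples stand in for decoded image frames and audio samples. Because the
# synchronous clock gives both tracks identical encoding timestamps, an
# exact-match join suffices here; names and data shapes are illustrative.

def synchronize_av(image_frames, audio_frames):
    """Pair image data and audio sample data sharing an encoding timestamp."""
    audio_by_ts = dict(audio_frames)
    return [(ts, img, audio_by_ts[ts])
            for ts, img in image_frames if ts in audio_by_ts]

pairs = synchronize_av([(0, "img0"), (40, "img1")],
                       [(0, "aud0"), (40, "aud1")])
assert pairs == [(0, "img0", "aud0"), (40, "img1", "aud1")]
```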
Step 150: when a viewpoint switching instruction is detected, determining a second viewpoint according to the viewpoint switching instruction.
The viewpoint switching instruction is used to instruct switching of the video viewing angle. Therefore, when a viewpoint switching instruction is detected, the second viewpoint to which the instruction requests switching is first determined, and then playback is switched from the video picture of the currently played first viewpoint to the video picture of the second viewpoint, thereby completing one switch of the video viewing angle. If the video viewing angle needs to be switched multiple times, a viewpoint switching instruction can be triggered multiple times, and the next viewpoint to switch to is determined each time based on the currently played viewpoint.
In one embodiment, the viewpoint switching instruction carries the line of sight rotation direction and the line of sight rotation angle to indicate the viewpoint to be switched based on the line of sight rotation direction and the line of sight rotation angle. Therefore, it is necessary to first determine the line of sight rotation direction and the line of sight rotation angle indicated by the viewpoint switching instruction, and then determine the second viewpoint from the plurality of viewpoints based on the first viewpoint according to the line of sight rotation direction and the line of sight rotation angle.
For example, a rectangular coordinate system may be constructed with the center of the scene as the origin, and coordinate points representing the viewpoints are determined in this coordinate system according to the arrangement positions of the video collectors. Since the position of each viewpoint relative to the scene center is known from the arrangement positions of the video collectors, the angle interval in which each viewpoint lies can be determined. For example, in the arrangement of the plurality of video collectors 10 shown in fig. 2, if the rectangular coordinate system is evenly divided into 8 angular intervals, each viewpoint corresponds to one such interval. In some other embodiments, the angle intervals of the viewpoints may differ in size, which is not limited by this embodiment.
The first viewpoint is then located in the rectangular coordinate system and rotated according to the line-of-sight rotation direction and line-of-sight rotation angle indicated by the viewpoint switching instruction to obtain a target coordinate point; the target coordinate point is the viewpoint position corresponding to the video viewing angle to which the viewpoint switching instruction indicates switching. Because the scene viewing angle of the viewpoint in the angle interval where the target coordinate point falls is closest to that of the viewpoint position indicated by the viewpoint switching instruction, the viewpoint in that angle interval is taken as the second viewpoint.
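Under the assumptions of this example (viewpoints evenly spaced around the scene center, counterclockwise rotation taken as positive), the mapping from a gaze rotation to the second viewpoint can be sketched as follows; the function name and conventions are illustrative, not part of this disclosure.

```python
def second_viewpoint(first_vp, num_viewpoints, rotation_deg, clockwise):
    """Rotate the first viewpoint's center angle and snap the resulting
    target coordinate angle to the viewpoint owning that angle interval.

    Assumes num_viewpoints evenly spaced viewpoints, each owning an angular
    interval of 360/num_viewpoints degrees (8 intervals in the example above).
    """
    interval = 360.0 / num_viewpoints
    signed = -rotation_deg if clockwise else rotation_deg
    target = (first_vp * interval + signed) % 360.0
    return int(round(target / interval)) % num_viewpoints

# With 8 viewpoints each interval spans 45 degrees: rotating 90 degrees
# counterclockwise from viewpoint 0 lands on viewpoint 2.
assert second_viewpoint(0, 8, 90.0, clockwise=False) == 2
assert second_viewpoint(0, 8, 45.0, clockwise=True) == 7
```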
The second viewpoint may also be determined in other ways. For example, given that the video stream data collected at each viewpoint contains the viewing angle of that viewpoint, the viewing angle corresponding to the first viewpoint is rotated according to the line-of-sight rotation direction and line-of-sight rotation angle indicated by the viewpoint switching instruction, and the viewpoint whose viewing angle is closest to the resulting new viewing angle is determined as the second viewpoint.
In other embodiments, the plurality of video collectors distributed around the scene may be identified in advance, that is, each viewpoint is given a distinct viewpoint identifier, and the viewpoint switching instruction may directly indicate the viewpoint to switch to by its viewpoint identifier. This simplifies the viewpoint switching process, but requires obtaining in advance identification information such as the serial number of each video collector arranged in the scene.
Step 170: positioning a starting video frame in the video stream data corresponding to the second viewpoint, wherein the encoding timestamp of the starting video frame is the same as the encoding timestamp of the currently played video frame, and switching to play the video stream data corresponding to the second viewpoint starting from the starting video frame.
Taking the viewing of a ball game as an example, the need to watch a player's brilliant shot from various angles can be met. To ensure a good viewing experience, it must be ensured that the switching between video pictures corresponding to different viewpoints is continuous.
Based on this, in this embodiment, a starting video frame is positioned in the video stream data corresponding to the second viewpoint. The encoding timestamp contained in the positioned starting video frame is the same as that contained in the currently played video frame, which indicates that the starting video frame and the currently played video frame of the first viewpoint were collected simultaneously. The video stream data corresponding to the second viewpoint is then played starting from this frame, so that the video pictures played at the moment of switching were collected at the same time from different viewing angles, thereby ensuring the continuity of the video pictures. In the video playing process, a given video picture can thus be moved continuously and seamlessly through each angle, so that the details in the scene are fully presented in the played picture, satisfying the need for multi-angle switching during video playing.
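The frame-positioning step can be sketched as a timestamp search over the buffered frames of the second viewpoint. This is an illustrative sketch assuming the decoded frames are held in timestamp order; with a synchronous clock an exact match is expected, and the sketch falls back to the first frame at or after the current timestamp.

```python
import bisect

def locate_start_index(frame_timestamps, current_ts):
    """Index of the starting video frame: the frame whose encoding timestamp
    equals (or, as a fallback, first exceeds) the current frame's timestamp."""
    i = bisect.bisect_left(frame_timestamps, current_ts)
    if i == len(frame_timestamps):
        raise ValueError("no frame at or after the current timestamp")
    return i

# Frames every 40 ms (25 fps): switching while the frame stamped 120 is on
# screen starts the second viewpoint from its own frame stamped 120.
ts = [0, 40, 80, 120, 160]
assert locate_start_index(ts, 120) == 3
```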
Thus, in the present embodiment, encoding is performed based on a synchronous clock in the encoding stage of the multiple paths of video source signals, so that in the resulting multiple paths of video stream data, the encoding timestamps of the video frame data and audio frame data contained in each path are synchronized. In the video playing stage, when switching from the first viewpoint to the second viewpoint is required, a starting video frame whose encoding timestamp equals that of the currently played video frame is located in the video stream data corresponding to the second viewpoint, and playback of that video stream data begins from the starting video frame, ensuring that the first video picture played after switching and the last video picture played before switching were collected at the same moment. The two stages complement each other, thereby realizing seamless switching of video pictures between different viewpoints.
In another exemplary embodiment, as shown in fig. 4, before step 130, the method for controlling the playing of the multi-view video may further include the steps of:
step 210, analyzing the video stream data collected based on each viewpoint to obtain video frame data and audio frame data contained in the video stream data corresponding to each viewpoint.
In this embodiment, after a set of video stream addresses corresponding to multi-view video is obtained, video stream data collected based on each view is obtained, and the video stream data contains encoded video frame data and audio frame data.
The process of playing video stream data is actually the process of rendering image data and synchronously outputting audio sample data. Therefore, in this embodiment, the video frame data and the audio frame data contained in the video stream data corresponding to each viewpoint are obtained by analyzing the video stream data collected based on each viewpoint, and further, the video frame data and the audio frame data obtained by analyzing may be decoded to obtain corresponding image data and audio sample data.
Step 230, decoding the video frame data and the audio frame data contained in the video stream data corresponding to each viewpoint to obtain image data and audio sample data collected based on each viewpoint.
Decoding the video frame data and audio frame data contained in the video stream data corresponding to each viewpoint means decoding the video frame data according to the decoding rule corresponding to the video encoding rule, so as to restore the image signal captured by the video collector and obtain the image data of each video frame, and decoding the audio frame data so as to restore the sampled audio signal and obtain the audio sample data of each audio frame.
Therefore, in this embodiment, video stream data corresponding to all viewpoints are analyzed and decoded, when video stream data corresponding to a first viewpoint needs to be played, a video picture corresponding to the first viewpoint is played based on image data and audio sample data corresponding to the first viewpoint obtained in this embodiment, and when video stream data corresponding to a second viewpoint needs to be switched and played, a video picture corresponding to the second viewpoint is played based on image data and audio sample data corresponding to the second viewpoint obtained in this embodiment.
Therefore, in this embodiment, the video playing end interacts with the server only once to obtain the video stream data collected from multiple viewpoints. When playing the video of the default viewpoint or switching to the video of another viewpoint, the video playing end relies on the image data and audio sample data it has parsed and decoded itself, with no further interaction with the server during the switching process. This saves time, allows the video playing end to respond quickly both when playing the default viewpoint and when switching to other viewpoints, and further ensures the continuity of the switched video pictures.
In another exemplary embodiment, as shown in fig. 5, after step 130, the method for controlling the playing of the multi-view video may further include the steps of:
step 310, detecting a triggered user action in the playing process of the video stream data corresponding to the first viewpoint.
In this embodiment, the viewpoint switching instruction for instructing viewpoint switching is generated based on a user action triggered in the video playing interface, so that the multi-angle playing scheme of the video has a user interaction effect, and user experience can be improved.
During the playing of the video stream data corresponding to the first viewpoint, it is detected whether a user action is triggered in the video playing interface, for example, whether the user triggers a sliding action in the video playing interface, or triggers a click action in a viewpoint switching area provided in the video playing interface. The viewpoint switching area provided in the video playing interface may be a viewpoint switching control; if the user clicks the viewpoint switching control, it indicates that the user wants to switch to and watch the video picture of another viewpoint.
In step 330, when the user action is detected, the gaze rotation direction and gaze rotation angle indicated by the user action are determined.
When the user action is detected, the sight line rotation direction and the sight line rotation angle indicated by the user action need to be determined, so that the video angle which the user wants to watch is determined based on the sight line rotation direction and the sight line rotation angle, and then the accurate switching of the viewpoint is realized.
In an embodiment, when the detected user action is a sliding action triggered in the video playing interface, the gaze rotation angle indicated by the user action needs to be calculated according to the sliding direction, the sliding speed, and the sliding distance of the user action, and the gaze rotation direction indicated by the user action is the sliding direction corresponding to the sliding action.
For example, if the sliding direction of the sliding action triggered in the video playing interface is leftward, the gaze rotation direction may be determined as counterclockwise around the center of the scene; if the sliding direction is rightward, the gaze rotation direction is determined as clockwise around the center of the scene.
The larger the sliding distance of the user action on the video playing interface, the larger the viewing angle by which the user wants to switch. For example, in this embodiment a unit gaze angle value corresponding to a unit sliding distance is preset, and the gaze rotation angle corresponding to the user action is obtained by multiplying the sliding distance of the user action on the video playing interface by the unit gaze angle value.
Alternatively, the sliding distance corresponding to the user action may be determined from the sliding speed and the sliding time: the faster the sliding speed, the larger the sliding distance actually indicated by the user action, and the larger the viewing angle by which the user wants to switch. The actual sliding distance of the user action is obtained by multiplying the sliding speed by the sliding time, and the gaze rotation angle corresponding to the user action is then obtained by multiplying that sliding distance by the unit gaze angle value.
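The two calculations above reduce to simple products. In the following sketch, the unit gaze angle value (0.25 degrees per pixel) is an arbitrary illustrative constant, not a value specified by this disclosure.

```python
def gaze_rotation_angle(slide_distance_px, unit_angle_deg_per_px=0.25):
    """Gaze rotation angle = sliding distance x unit gaze angle value."""
    return slide_distance_px * unit_angle_deg_per_px

def gaze_rotation_from_speed(speed_px_per_s, duration_s,
                             unit_angle_deg_per_px=0.25):
    """Alternative form: sliding distance = sliding speed x sliding time."""
    return gaze_rotation_angle(speed_px_per_s * duration_s,
                               unit_angle_deg_per_px)

# A 180 px swipe at 0.25 deg/px requests a 45-degree rotation; a 0.3 s swipe
# at 600 px/s covers the same distance and yields the same angle.
assert gaze_rotation_angle(180) == 45.0
assert gaze_rotation_from_speed(600, 0.3) == 45.0
```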
When the detected user action is a click action triggered in a viewpoint switching area provided in the video playing interface, the gaze rotation angle and gaze rotation direction indicated by the user action are those associated with the triggered viewpoint switching area. For example, if the video playing interface is provided with a forward viewpoint switching control and a backward viewpoint switching control, clicking the forward viewpoint switching control indicates that the current line of sight is to be rotated counterclockwise by a preset angle, and clicking the backward viewpoint switching control indicates that the current line of sight is to be rotated clockwise by a preset angle.
It should be understood that the gaze rotation direction indicated by the sliding direction corresponding to the above sliding motion and the gaze rotation directions associated with the forward viewpoint switching control and the backward viewpoint switching control may be preset directions, and may be set correspondingly based on different layout states of multiple viewpoints relative to a scene, which is not limited in this embodiment.
And 350, generating a viewpoint switching instruction according to the sight line rotating direction and the sight line rotating angle.
Based on the gaze rotation direction and gaze rotation angle determined in step 330, a viewpoint switching instruction may be generated to instruct that the second viewpoint be determined according to the gaze rotation direction and gaze rotation angle indicated by the instruction, and that playback be switched to the video picture corresponding to the second viewpoint.
Therefore, based on the method provided by this embodiment, the user can freely trigger user actions in the video playing interface to switch between video pictures at different angles; that is, the multi-view video play control scheme provided by this embodiment supports user interaction, giving the user a more immersive viewing experience.
In another exemplary embodiment, if a pause playing instruction is detected during the playing of the video stream data corresponding to the first viewpoint, for example, a pause playing control is triggered in the video playing interface, the playing of the video stream data corresponding to the first viewpoint is paused.
If the viewpoint switching instruction is detected after the playing of the video stream data corresponding to the first viewpoint is suspended, the content described in step 150 is executed, that is, the second viewpoint is determined according to the line-of-sight rotation direction and the line-of-sight rotation angle indicated by the viewpoint switching instruction, so as to switch to play the video stream data corresponding to the second viewpoint.
It should be noted that, because the video playing interface is still in the paused state, after the starting video frame in the video stream data corresponding to the second viewpoint is located, the starting video frame is displayed in the video playing interface while the paused state is maintained; only when a resume-playback instruction is detected is the video stream data corresponding to the second viewpoint played starting from the starting video frame.
Therefore, with the method provided by this embodiment, video pictures at multiple angles can be switched and viewed even while the video playing interface is paused. In the scenario of watching a ball game, a player's brilliant shooting action can be viewed from multiple angles while playback is paused, so that the details of the action can be observed vividly, and detailed picture material can be provided for event-related activities such as commentary and technical analysis.
Another aspect of the present application also provides a video processing method for a multi-view video. The method may be applied to the implementation environment shown in fig. 1, for example, as specifically performed by the server 200 in the implementation environment shown in fig. 1. The method may also be applied to the video playing system shown in fig. 2, for example, specifically executed by the video server 40 shown in fig. 2.
As shown in fig. 6, in an exemplary embodiment, the method for processing multi-view video may include the steps of:
Step 410: acquiring video stream data collected based on each viewpoint, wherein each path of video stream data is obtained by encoding, based on a synchronous clock, the video source signals collected from each viewpoint, and all paths of video stream data correspond to the same video identifier.
In this embodiment, the video stream data collected from each viewpoint acquired by the server may be sent by the stream pusher 30 shown in fig. 2, or acquired in a data copy manner, which is not limited in this embodiment.
As described in the foregoing embodiment, the video stream data collected from each viewpoint is obtained by encoding the video source signals collected from each viewpoint based on the synchronous clock, so that the encoding timestamps added to the video frames and the audio frames collected simultaneously from different viewpoints in each video stream data are the same, and each video stream data corresponds to the same video identifier, which is the video identifier corresponding to the multi-viewpoint video.
Step 430: storing the video stream data collected based on each viewpoint respectively, to obtain the video stream address corresponding to each path of video stream data.
In this embodiment, the storage of the video stream data collected from each viewpoint by the server may specifically be permanent storage, for example, in the video playing system shown in fig. 2, the media server 41 stores multiple paths of video stream data into the media content repository 43, or the storage of the video stream data collected from each viewpoint by the server may also be cache, which is not limited herein.
After the server stores the video stream data collected from each viewpoint, the storage address of each video stream is the video stream address corresponding to each video stream data.
Step 450, when receiving a multi-view video acquisition request sent by the video playing client, returning a video stream address set formed by video stream addresses corresponding to each video stream data to the video playing client according to the video identifier contained in the multi-view video acquisition request.
When the server receives a multi-view video acquisition request sent by the client, a video stream address set formed by video stream addresses corresponding to all video stream data is returned to the video playing client according to video identifiers contained in the multi-view video acquisition request, so that the video playing client acquires the video stream data corresponding to the multi-view video based on the received video stream address set and performs view switching in the video playing process.
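The server-side bookkeeping described above can be sketched as a mapping from the shared video identifier to the address set; the class and method names below are hypothetical and chosen only for illustration.

```python
class VideoInfoServer:
    """Hypothetical sketch of the request-video-information role: streams of
    different viewpoints of the same scene share one video identifier, and
    the full address set is returned when that identifier is requested."""

    def __init__(self):
        self._streams = {}

    def store_stream(self, video_id, address):
        # Streams of different viewpoints of the same scene share video_id.
        self._streams.setdefault(video_id, []).append(address)

    def address_set(self, video_id):
        return list(self._streams.get(video_id, []))

server = VideoInfoServer()
for i in range(4):
    server.store_stream("match-001", "rtmp://media/match-001/view%d" % i)
assert len(server.address_set("match-001")) == 4
assert server.address_set("unknown") == []
```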
Therefore, by storing the video stream data corresponding to the multi-view video in the server as described in this embodiment, and through the interaction between the server and the video playing client described in this embodiment, a data basis is provided for the video playing client to realize multi-angle interactive viewing of the video.
Fig. 7 is a block diagram illustrating a play control apparatus of a multi-view video according to an exemplary embodiment. As shown in fig. 7, in an exemplary embodiment, the apparatus includes a multi-view video acquisition module 510, a video play module 530, a view switching module 550, and a video play switching module 570.
The multi-view video acquiring module 510 is configured to acquire video stream data acquired based on each view according to a video stream address set corresponding to a multi-view video, where the video stream data is obtained by encoding video source signals acquired from each view based on a synchronous clock. The video playing module 530 is configured to play video stream data corresponding to the first view. The viewpoint switching module 550 is configured to determine the second viewpoint according to the viewpoint switching instruction when the viewpoint switching instruction is detected. The video playing switching module 570 is configured to locate a starting video frame in the video stream data corresponding to the second view, where the coding timestamp of the starting video frame is the same as the coding timestamp of the currently playing video frame, and switch to play the video stream data corresponding to the second view from the starting video frame.
With the multi-view video play control apparatus provided by this embodiment, in the video playing stage, when switching from the first viewpoint to the second viewpoint is required, a starting video frame whose encoding timestamp equals that of the currently played video frame is located in the video stream data corresponding to the second viewpoint, and playback of that video stream data begins from the starting video frame. This ensures that the first video picture played after switching and the last video picture played before switching were collected at the same moment, thereby achieving seamless switching of video pictures between different viewpoints and giving the user a highly realistic video viewing experience.
In another exemplary embodiment, the video playback module 530 includes a frame data acquisition unit and a frame data output unit. The frame data acquisition unit is used for acquiring image data and audio sampling data acquired based on a first viewpoint according to video stream data corresponding to the first viewpoint, and the image data and the audio sampling data are synchronized based on an encoding timestamp. The frame data output unit is used for performing picture rendering on the image data to obtain a playing picture of the video stream data corresponding to the first viewpoint, and synchronously playing the audio sampling data according to the acquisition time.
The multi-view video play control apparatus provided by this embodiment interacts with the server only once to obtain the video stream data collected from multiple viewpoints. When playing the video of the default viewpoint or switching to the video of another viewpoint, the video playing end switches viewpoints based on the image data and audio sample data it has parsed and decoded itself, with no interaction with the server during the entire switching process. This saves time, allows the video playing end to respond quickly both when playing the default viewpoint and when switching to the videos of other viewpoints, and further ensures the continuity of the switched video pictures.
In another exemplary embodiment, the apparatus further includes a data parsing module and a data decoding module, which are disposed between the multi-view video acquiring module 510 and the video playing module 530.
The data parsing module is used for parsing the video stream data collected based on each viewpoint to obtain the video frame data and audio frame data contained in the video stream data corresponding to each viewpoint. The data decoding module is used for decoding the video frame data and audio frame data contained in the video stream data corresponding to each viewpoint to obtain the image data and audio sampling data collected at each viewpoint.
The playing control device for multi-view video provided by this embodiment parses and decodes the multiple video streams separately, so that the decoded image data and audio sampling data of each viewpoint can be switched to quickly when the viewpoint changes, ensuring the continuity of the video pictures played before and after the switch.
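The per-viewpoint parse-then-decode pipeline can be sketched as follows; the packet layout and the stubbed "decoding" are assumptions made only for illustration:

```python
def parse_stream(packets):
    """Split one viewpoint's stream into video frames and audio frames."""
    video_frames = [p for p in packets if p["type"] == "video"]
    audio_frames = [p for p in packets if p["type"] == "audio"]
    return video_frames, audio_frames

def decode_view(packets):
    """Decode one viewpoint: real decoding is stubbed as payload extraction,
    but the encoding timestamp is kept so image and audio stay aligned."""
    video, audio = parse_stream(packets)
    images = [(p["pts"], p["payload"]) for p in video]
    samples = [(p["pts"], p["payload"]) for p in audio]
    return images, samples

stream = [
    {"type": "video", "pts": 0, "payload": "I-frame"},
    {"type": "audio", "pts": 0, "payload": "audio-frame"},
    {"type": "video", "pts": 3000, "payload": "P-frame"},
]
images, samples = decode_view(stream)
```

Running this pipeline once per viewpoint, ahead of any switch, is what lets the player jump between decoded pictures without touching the network.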
In another exemplary embodiment, the viewpoint switching module 550 includes an information determining unit and a viewpoint determining unit. The information determining unit is used for determining the gaze rotation direction and the gaze rotation angle indicated by the viewpoint switching instruction. The viewpoint determining unit is used for determining a second viewpoint from the plurality of viewpoints, based on the first viewpoint, according to the gaze rotation direction and the gaze rotation angle.
Based on the playing control device for multi-view video provided by this embodiment, the second viewpoint is determined according to the gaze rotation direction and gaze rotation angle indicated by the viewpoint switching instruction, so the viewpoint the user intends to switch to can be identified more accurately.
In another exemplary embodiment, the viewpoint determining unit includes a coordinate system constructing subunit, an angle interval determining subunit, and a viewpoint positioning subunit. The coordinate system constructing subunit is used for constructing a rectangular coordinate system with the center of the scene as the origin. The angle interval determining subunit is used for determining, according to the arrangement positions of the video collectors, the corresponding coordinate points in the rectangular coordinate system, where the coordinate points represent the viewpoints, and for determining the angle interval in which each viewpoint is located. The viewpoint positioning subunit is used for locating, in the rectangular coordinate system, the target coordinate point obtained by rotating the first viewpoint according to the gaze rotation direction and the gaze rotation angle, and taking the viewpoint in the angle interval where the target coordinate point falls as the second viewpoint.
The playing control device for multi-view video provided by this embodiment gives a precise way to determine the second viewpoint: a rectangular coordinate system is constructed with the center of the scene as the origin, the angle intervals of the different viewpoints are determined in that coordinate system, and the second viewpoint the user wishes to switch to is then obtained accurately from the angle interval in which the target coordinate point falls.
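Under the simplifying assumption that the video collectors are evenly spaced on a circle around the scene center (the even spacing and the left/right sign convention are our assumptions, not limitations of this disclosure), the angle-interval lookup reduces to:

```python
def viewpoint_angles(n):
    """Angle, in degrees, of each viewpoint in the scene-centered system."""
    return [i * 360.0 / n for i in range(n)]

def second_viewpoint(first, n, direction, angle):
    """Rotate the first viewpoint's angle by the gaze rotation angle and
    return the viewpoint whose angle interval contains the target point."""
    spacing = 360.0 / n
    sign = 1 if direction == "left" else -1  # sign convention assumed
    target = (viewpoint_angles(n)[first] + sign * angle) % 360.0
    # Viewpoint i owns the interval [i*spacing - spacing/2, i*spacing + spacing/2)
    return int((target + spacing / 2) // spacing) % n
```

For example, with 8 collectors (45° spacing), rotating 95° left from viewpoint 0 produces a target angle of 95°, which falls in viewpoint 2's interval [67.5°, 112.5°).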
In another exemplary embodiment, the apparatus further includes a user action detection module, a gaze rotation information determination module, and an instruction generation module, which are disposed between the video playing module 530 and the viewpoint switching module 550.
The user action detection module is used for detecting a user action triggered during the playing of the video stream data corresponding to the first viewpoint. The gaze rotation information determination module is used for determining, when a user action is detected, the gaze rotation direction and the gaze rotation angle indicated by that action. The instruction generation module is used for generating the viewpoint switching instruction according to the gaze rotation direction and the gaze rotation angle.
Based on the playing control device for multi-view video provided by this embodiment, the user can freely trigger a user action on the video playing interface to switch among video pictures from different angles, which enhances interactivity and gives the user a more immersive viewing experience.
In another exemplary embodiment, the user action includes a sliding action triggered in the video playing interface, and the gaze rotation information determination module includes a sliding information acquisition unit and a gaze rotation information calculation unit. The sliding information acquisition unit is used for acquiring the sliding direction and sliding distance of the user action. The gaze rotation information calculation unit is used for calculating the gaze rotation angle from the sliding distance and taking the sliding direction of the sliding action as the gaze rotation direction.
Based on the playing control device for multi-view video provided by this embodiment, the user can make the video playing interface switch to the video pictures of different viewpoints simply by performing a sliding action on the interface, which makes the operation convenient.
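A minimal sketch of the slide-to-rotation mapping; the degrees-per-pixel sensitivity is an illustrative constant that the disclosure leaves unspecified:

```python
DEG_PER_PIXEL = 0.25  # assumed sensitivity, not specified by the disclosure

def slide_to_rotation(dx_pixels):
    """The sliding direction gives the gaze rotation direction; the
    sliding distance determines the gaze rotation angle."""
    direction = "left" if dx_pixels < 0 else "right"
    angle = abs(dx_pixels) * DEG_PER_PIXEL
    return direction, angle
```

At this sensitivity, for instance, a 180-pixel swipe to the left maps to a 45° leftward gaze rotation.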
In another exemplary embodiment, the user action includes a click action triggered in a viewpoint switching area provided in the video playing interface, and the gaze rotation information determination module is used for taking the gaze rotation direction and gaze rotation angle associated with that viewpoint switching area as the gaze rotation direction and gaze rotation angle indicated by the user action.
In another exemplary embodiment, the apparatus further includes a video request module and a message parsing module, which are provided before the multi-view video acquisition module 510.
The video request module is used for sending a multi-view video acquisition request to the server when a video playing instruction is detected, wherein the multi-view video acquisition request contains a video identifier corresponding to the multi-view video requested to be played. And the message analysis module is used for receiving a response message returned by the server aiming at the multi-view video acquisition request, and analyzing the response message to obtain a video stream address set associated with the video identifier.
Based on the playing control device for multi-view video provided by this embodiment, the video stream addresses of the video stream data collected at each viewpoint can be obtained through interaction with the server, and the stored multiple video streams can then be conveniently fetched from the server according to those addresses.
In another exemplary embodiment, the apparatus further includes a play pause module and an instruction detection module, which are specifically disposed between the video play module 530 and the viewpoint switching module 550.
The playing pause module is used for pausing the playing of the video stream data corresponding to the first viewpoint when a playing pause instruction is detected. The instruction detection module is used for detecting a viewpoint switching instruction after the playing of the video stream data corresponding to the first viewpoint has been paused and, if the viewpoint switching instruction is detected, triggering the operations configured in the viewpoint switching module 550.
Based on the playing control device of the multi-viewpoint video provided by the embodiment, even if the video playing interface is in the pause playing state, the video pictures at multiple angles can be switched to be watched.
Another aspect of the present application also provides a video processing apparatus for multi-view video, which may be deployed in a server. As shown in fig. 8, in an exemplary embodiment, the video processing apparatus for multi-view video includes a video data acquisition module 610, a video data storage module 630, and a video play response module 650.
The video data obtaining module 610 is configured to obtain video stream data collected based on each viewpoint, where each video stream data is obtained by encoding a video source signal collected from each viewpoint based on a synchronous clock, and each video stream data corresponds to a same video identifier. The video data storage module 630 is configured to store video stream data acquired based on each viewpoint, respectively, to obtain a video stream address corresponding to each video stream data. The video playing response module 650 is configured to, when receiving a multi-view video acquisition request sent by the video playing client, return a video stream address set formed by video stream addresses corresponding to each piece of video stream data to the video playing client according to a video identifier included in the multi-view video acquisition request.
The video processing device for multi-view video provided by this embodiment stores the video stream data corresponding to a multi-view video and, through its interaction with the video playing client of the foregoing embodiments, provides the data basis for the seamless switching of video pictures of different viewpoints in the video client.
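The server-side flow — store each viewpoint's stream, then answer an acquisition request with the address set for the shared video identifier — can be sketched as below; the class name, path scheme, and method names are assumptions for illustration:

```python
class MultiViewStore:
    def __init__(self):
        self._addresses = {}  # video identifier -> list of stream addresses

    def store(self, video_id, streams):
        """Persist each viewpoint's stream and record one address per view."""
        addrs = [f"/videos/{video_id}/view{i}.ts" for i in range(len(streams))]
        self._addresses[video_id] = addrs
        return addrs

    def address_set(self, video_id):
        """Answer a multi-view acquisition request carrying video_id."""
        return self._addresses.get(video_id, [])

store = MultiViewStore()
store.store("match42", [b"view-a", b"view-b", b"view-c"])
```

The playing client then fetches all streams from the returned addresses in a single round of interaction, as described in the embodiments above.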
It should be noted that the apparatus provided in the foregoing embodiment and the method provided in the foregoing embodiment belong to the same concept, and the specific manner in which each module and unit execute operations has been described in detail in the method embodiment, and is not described again here.
Embodiments of the present application further provide an electronic device, including a processor and a memory, where computer-readable instructions are stored on the memory, and the computer-readable instructions, when executed by the processor, implement the playing control method of the multi-view video or the video processing method of the multi-view video as described above.
Fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
It should be noted that the electronic device is merely an example adapted to the present application and should not be construed as limiting its scope of use in any way. Nor should the electronic device be construed as needing to rely on, or as having to include, one or more of the components of the exemplary electronic device illustrated in fig. 9.
As shown in fig. 9, in an exemplary embodiment, the electronic device includes a processing component 801, a memory 802, a power component 803, a multimedia component 804, an audio component 805, a sensor component 807, and a communication component 808. The above components are not all necessary, and the electronic device may add other components or reduce some components according to its own functional requirements, which is not limited in this embodiment.
The processing component 801 generally controls overall operation of the electronic device, such as operations associated with display, data communication, and log data processing. The processing component 801 may include one or more processors 809 to execute instructions to perform all or a portion of the above-described operations. Further, the processing component 801 may include one or more modules that facilitate interaction between the processing component 801 and other components. For example, the processing component 801 may include a multimedia module to facilitate interaction between the multimedia component 804 and the processing component 801.
The memory 802 is configured to store various types of data to support operation at the electronic device, examples of which include instructions for any application or method operating on the electronic device. The memory 802 stores one or more modules configured to be executed by the one or more processors 809 to perform all or part of the steps of the playing control method of the multi-view video or the video processing method of the multi-view video described in the above embodiments.
The power supply component 803 provides power to the various components of the electronic device. The power components 803 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for an electronic device.
The multimedia component 804 includes a screen that provides an output interface between the electronic device and the user. In some embodiments, the screen may include a TP (Touch Panel) and an LCD (Liquid Crystal Display). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 805 is configured to output and/or input audio signals. For example, the audio component 805 includes a microphone configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. In some embodiments, the audio component 805 also includes a speaker for outputting audio signals.
The sensor assembly 807 includes one or more sensors for providing various aspects of status assessment for the electronic device. For example, the sensor assembly 807 may detect an open/closed state of the electronic device, and may also detect a temperature change of the electronic device.
The communication component 808 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a Wireless network based on a communication standard, such as Wi-Fi (Wireless-Fidelity, Wireless network).
It will be appreciated that the configuration shown in fig. 9 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 9, or have different components than shown in fig. 9. Each of the components shown in fig. 9 may be implemented in hardware, software, or a combination thereof.
Another aspect of the present application also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the method for controlling playback of a multi-view video or the method for processing a video of a multi-view video as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the playing control method of the multi-view video or the video processing method of the multi-view video provided in the above-described embodiments.
The above description is only a preferred exemplary embodiment of the present application, and is not intended to limit the embodiments of the present application, and those skilled in the art can easily make various changes and modifications according to the main concept and spirit of the present application, so that the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A playing control method of a multi-view video, characterized in that the multi-view video is obtained by simultaneously performing video acquisition on the same scene with a plurality of video collectors arranged at different viewpoints, and the method comprises the following steps:
acquiring video stream data acquired based on each viewpoint according to a video stream address set corresponding to a multi-viewpoint video, wherein the video stream data is obtained by encoding video source signals acquired from each viewpoint based on a synchronous clock;
playing video stream data corresponding to a first viewpoint;
when a viewpoint switching instruction is detected, determining a second viewpoint according to the viewpoint switching instruction;
and positioning a starting video frame in the video stream data corresponding to the second viewpoint, wherein the encoding timestamp contained in the starting video frame is the same as the encoding timestamp contained in the currently played video frame, and switching to play the video stream data corresponding to the second viewpoint starting from the starting video frame.
2. The method of claim 1, wherein playing the video stream data corresponding to the first view comprises:
acquiring image data and audio sampling data acquired based on the first viewpoint according to video stream data corresponding to the first viewpoint, wherein the image data and the audio sampling data are synchronous based on an encoding timestamp;
and performing picture rendering on the image data to obtain a playing picture of the video stream data corresponding to the first viewpoint, and synchronously playing the audio sampling data according to the encoding timestamp.
3. The method of claim 2, wherein before playing the video stream data corresponding to the first view, the method further comprises:
analyzing the video stream data collected based on each viewpoint to obtain video frame data and audio frame data contained in the video stream data corresponding to each viewpoint;
and decoding the video frame data and the audio frame data contained in the video stream data corresponding to each viewpoint to obtain the image data and the audio sampling data collected from each viewpoint.
4. The method of claim 1, wherein determining a second view according to the view switching instruction when the view switching instruction is detected comprises:
determining a gaze rotation direction and a gaze rotation angle indicated by the viewpoint switching instruction;
determining the second viewpoint from a plurality of viewpoints based on the first viewpoint according to the gaze rotation direction and the gaze rotation angle.
5. The method of claim 4, wherein determining the second viewpoint from the plurality of viewpoints based on the first viewpoint and according to the gaze rotation direction and the gaze rotation angle comprises:
constructing a rectangular coordinate system by taking the center of the scene as an origin;
determining corresponding coordinate points in the rectangular coordinate system according to the arrangement positions of the video collectors, the coordinate points being used for representing the viewpoints, and determining the angle interval in which each viewpoint is located;
and locating, in the rectangular coordinate system, a target coordinate point obtained by rotating the first viewpoint according to the gaze rotation direction and the gaze rotation angle, and taking the viewpoint in the angle interval where the target coordinate point is located as the second viewpoint.
6. The method of claim 4, wherein after playing the video stream data corresponding to the first view, the method further comprises:
detecting a user action triggered during the playing of the video stream data corresponding to the first viewpoint;
when the user action is detected, determining a gaze rotation direction and a gaze rotation angle indicated by the user action;
and generating the viewpoint switching instruction according to the gaze rotation direction and the gaze rotation angle.
7. The method of claim 6, wherein the user action comprises a slide action triggered in a video playback interface; when the user action is detected, determining a gaze rotation direction and a gaze rotation angle indicated by the user action, comprising:
acquiring the sliding direction and the sliding distance of the user action;
and calculating the gaze rotation angle according to the sliding distance, and taking the sliding direction of the slide action as the gaze rotation direction.
8. The method according to claim 6, wherein the user action comprises a click action triggered in a viewpoint switching area provided in a video playing interface; when the user action is detected, determining a gaze rotation direction and a gaze rotation angle indicated by the user action, comprising:
and determining the gaze rotation angle and the gaze rotation direction associated with the viewpoint switching area as the gaze rotation direction and the gaze rotation angle indicated by the user action.
9. The method of claim 1, wherein before acquiring the video stream data collected from each viewpoint according to the video stream address set corresponding to the multi-viewpoint video, the method further comprises:
when a video playing instruction is detected, sending a multi-view video acquisition request to a server, wherein the multi-view video acquisition request contains a video identifier corresponding to the multi-view video requested to be played;
and receiving a response message returned by the server aiming at the multi-view video acquisition request, and analyzing the response message to obtain a video stream address set associated with the video identifier.
10. The method of claim 1, wherein after playing the video stream data corresponding to the first view, the method further comprises:
when a playing pause instruction is detected, pausing to play the video stream data corresponding to the first viewpoint;
and if the viewpoint switching instruction is detected after the video stream data corresponding to the first viewpoint is paused, executing the step of determining the second viewpoint according to the viewpoint switching instruction.
11. The method of claim 10, wherein after locating a starting video frame in the video stream data corresponding to the second view, the method further comprises:
switching to display the starting video frame;
and when a playing recovery instruction is detected, switching to play the video stream data corresponding to the second viewpoint from the initial video frame.
12. A video processing method of a multi-view video, characterized in that the multi-view video is obtained by simultaneously performing video acquisition on the same scene with a plurality of video collectors arranged at different viewpoints, and the method comprises the following steps:
acquiring video stream data acquired based on each viewpoint, wherein the video stream data is obtained by encoding video source signals acquired from each viewpoint based on a synchronous clock, and the video stream data corresponds to the same video identifier;
respectively storing the video stream data acquired based on each viewpoint to obtain a video stream address corresponding to each video stream data;
when a multi-viewpoint video acquisition request sent by a video playing client is received, returning a video stream address set formed by the video stream addresses corresponding to all the video stream data to the video playing client according to the video identifier contained in the multi-viewpoint video acquisition request.
13. A playing control device of a multi-view video, characterized in that the multi-view video is obtained by simultaneously performing video acquisition on the same scene with a plurality of video collectors arranged at different viewpoints, and the device comprises:
the multi-view video acquisition module is used for acquiring video stream data collected based on each viewpoint according to a video stream address set corresponding to a multi-view video, wherein the video stream data is obtained by encoding video source signals acquired from each viewpoint based on a synchronous clock;
the video playing module is used for playing video stream data corresponding to the first viewpoint;
the viewpoint switching module is used for determining a second viewpoint according to the viewpoint switching instruction when the viewpoint switching instruction is detected;
and the video playing switching module is used for positioning a starting video frame in the video stream data corresponding to the second viewpoint, wherein the encoding timestamp of the starting video frame is the same as the encoding timestamp of the currently played video frame, and switching to play the video stream data corresponding to the second viewpoint starting from the starting video frame.
14. An electronic device, comprising:
a memory storing computer readable instructions;
a processor to read computer readable instructions stored by the memory to perform the method of any of claims 1-12.
15. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-12.
CN202011006555.9A 2020-09-23 2020-09-23 Multi-view video playing control method and device, electronic equipment and storage medium Pending CN111866525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011006555.9A CN111866525A (en) 2020-09-23 2020-09-23 Multi-view video playing control method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111866525A true CN111866525A (en) 2020-10-30

Family

ID=72968443



Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112383784A (en) * 2020-11-16 2021-02-19 浙江传媒学院 Video playing method, video transmission method and VR cluster playing system
CN112839255A (en) * 2020-12-31 2021-05-25 北京达佳互联信息技术有限公司 Video playing method and device, electronic equipment and computer readable storage medium
CN113014943A (en) * 2021-03-03 2021-06-22 上海七牛信息技术有限公司 Video playing method, video player and video live broadcasting system
CN113259764A (en) * 2021-07-06 2021-08-13 北京达佳互联信息技术有限公司 Video playing method, video playing device, electronic equipment and video playing system
CN113256491A (en) * 2021-05-11 2021-08-13 北京奇艺世纪科技有限公司 Free visual angle data processing method, device, equipment and storage medium
CN113794942A (en) * 2021-09-09 2021-12-14 北京字节跳动网络技术有限公司 Method, apparatus, system, device and medium for switching view angle of free view angle video
CN113905186A (en) * 2021-09-02 2022-01-07 北京大学深圳研究生院 Free viewpoint video picture splicing method, terminal and readable storage medium
CN114205669A (en) * 2021-12-27 2022-03-18 咪咕视讯科技有限公司 Free visual angle video playing method and device and electronic equipment
CN114390324A (en) * 2022-03-23 2022-04-22 阿里云计算有限公司 Video processing method and system and cloud rebroadcasting method
CN114513674A (en) * 2020-11-16 2022-05-17 上海科技大学 Interactive live broadcast data transmission/processing method, processing system, medium and server
WO2022111554A1 (en) * 2020-11-30 2022-06-02 华为技术有限公司 View switching method and apparatus
CN114697705A (en) * 2020-12-29 2022-07-01 深圳云天励飞技术股份有限公司 Video stream object processing method and device, video stream processing system and electronic equipment
CN114745597A (en) * 2022-02-11 2022-07-12 北京优酷科技有限公司 Video processing method and apparatus, electronic device, and computer-readable storage medium
CN114866787A (en) * 2022-07-04 2022-08-05 深圳市必提教育科技有限公司 Live broadcast implementation method and system
WO2022193875A1 (en) * 2021-03-15 2022-09-22 腾讯科技(深圳)有限公司 Method and apparatus for processing multi-viewing-angle video, and device and storage medium
WO2022206595A1 (en) * 2021-03-31 2022-10-06 华为技术有限公司 Image processing method and related device
CN115174942A (en) * 2022-07-08 2022-10-11 叠境数字科技(上海)有限公司 Free visual angle switching method and interactive free visual angle playing system
WO2023029207A1 (en) * 2021-09-02 2023-03-09 北京大学深圳研究生院 Video data processing method, decoding device, encoding device, and storage medium
WO2023029252A1 (en) * 2021-09-02 2023-03-09 北京大学深圳研究生院 Multi-viewpoint video data processing method, device, and storage medium
CN112866669B (en) * 2021-01-15 2023-09-15 聚好看科技股份有限公司 Method and device for determining data switching time

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662693A (en) * 2008-08-27 2010-03-03 深圳华为通信技术有限公司 Method, device and system for sending and playing multi-viewpoint media content
CN102347043A (en) * 2010-07-30 2012-02-08 腾讯科技(北京)有限公司 Method for playing multi-angle video and system
CN106454401A (en) * 2016-10-26 2017-02-22 乐视网信息技术(北京)股份有限公司 Method and device for playing video
CN111355967A (en) * 2020-03-11 2020-06-30 叠境数字科技(上海)有限公司 Video live broadcast processing method, system, device and medium based on free viewpoint
CN111355966A (en) * 2020-03-05 2020-06-30 上海乐杉信息技术有限公司 Surrounding free visual angle live broadcast method and system
CN111372145A (en) * 2020-04-15 2020-07-03 烽火通信科技股份有限公司 Viewpoint switching method and system for multi-viewpoint video
CN111447461A (en) * 2020-05-20 2020-07-24 上海科技大学 Synchronous switching method, device, equipment and medium for multi-view live video


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112383784A (en) * 2020-11-16 2021-02-19 Communication University of Zhejiang Video playing method, video transmission method and VR cluster playing system
CN114513674A (en) * 2020-11-16 2022-05-17 ShanghaiTech University Interactive live broadcast data transmission/processing method, processing system, medium and server
WO2022111554A1 (en) * 2020-11-30 2022-06-02 Huawei Technologies Co Ltd View switching method and apparatus
CN114697705A (en) * 2020-12-29 2022-07-01 Shenzhen Intellifusion Technologies Co Ltd Video stream object processing method and device, video stream processing system and electronic equipment
CN114697705B (en) * 2020-12-29 2024-03-22 Shenzhen Intellifusion Technologies Co Ltd Video stream object processing method and device, video stream processing system and electronic equipment
CN112839255B (en) * 2020-12-31 2021-11-02 Beijing Dajia Internet Information Technology Co Ltd Video playing method and device, electronic equipment and computer readable storage medium
CN112839255A (en) * 2020-12-31 2021-05-25 Beijing Dajia Internet Information Technology Co Ltd Video playing method and device, electronic equipment and computer readable storage medium
CN112866669B (en) * 2021-01-15 2023-09-15 Juhaokan Technology Co Ltd Method and device for determining data switching time
CN113014943A (en) * 2021-03-03 2021-06-22 Shanghai Qiniu Information Technology Co Ltd Video playing method, video player and video live broadcasting system
WO2022193875A1 (en) * 2021-03-15 2022-09-22 Tencent Technology (Shenzhen) Co Ltd Method and apparatus for processing multi-viewing-angle video, and device and storage medium
WO2022206595A1 (en) * 2021-03-31 2022-10-06 Huawei Technologies Co Ltd Image processing method and related device
CN113256491A (en) * 2021-05-11 2021-08-13 Beijing QIYI Century Science and Technology Co Ltd Free visual angle data processing method, device, equipment and storage medium
CN113259764A (en) * 2021-07-06 2021-08-13 Beijing Dajia Internet Information Technology Co Ltd Video playing method, video playing device, electronic equipment and video playing system
CN113905186A (en) * 2021-09-02 2022-01-07 Peking University Shenzhen Graduate School Free viewpoint video picture splicing method, terminal and readable storage medium
WO2023029204A1 (en) * 2021-09-02 2023-03-09 Peking University Shenzhen Graduate School Free viewpoint video screen splicing method, terminal, and readable storage medium
CN113905186B (en) * 2021-09-02 2023-03-10 Peking University Shenzhen Graduate School Free viewpoint video picture splicing method, terminal and readable storage medium
WO2023029252A1 (en) * 2021-09-02 2023-03-09 Peking University Shenzhen Graduate School Multi-viewpoint video data processing method, device, and storage medium
WO2023029207A1 (en) * 2021-09-02 2023-03-09 Peking University Shenzhen Graduate School Video data processing method, decoding device, encoding device, and storage medium
CN113794942A (en) * 2021-09-09 2021-12-14 Beijing ByteDance Network Technology Co Ltd Method, apparatus, system, device and medium for switching view angle of free view angle video
CN113794942B (en) * 2021-09-09 2022-12-02 Beijing ByteDance Network Technology Co Ltd Method, apparatus, system, device and medium for switching view angle of free view angle video
CN114205669B (en) * 2021-12-27 2023-10-17 MIGU Video Technology Co Ltd Free view video playing method and device and electronic equipment
CN114205669A (en) * 2021-12-27 2022-03-18 MIGU Video Technology Co Ltd Free visual angle video playing method and device and electronic equipment
CN114745597A (en) * 2022-02-11 2022-07-12 Beijing Youku Technology Co Ltd Video processing method and apparatus, electronic device, and computer-readable storage medium
CN114390324A (en) * 2022-03-23 2022-04-22 Alibaba Cloud Computing Co Ltd Video processing method and system and cloud rebroadcasting method
CN114866787B (en) * 2022-07-04 2022-09-23 Shenzhen Biti Education Technology Co Ltd Live broadcast implementation method and system
CN114866787A (en) * 2022-07-04 2022-08-05 Shenzhen Biti Education Technology Co Ltd Live broadcast implementation method and system
CN115174942A (en) * 2022-07-08 2022-10-11 DGene Digital Technology (Shanghai) Co Ltd Free visual angle switching method and interactive free visual angle playing system

Similar Documents

Publication Publication Date Title
CN111866525A (en) Multi-view video playing control method and device, electronic equipment and storage medium
US11381739B2 (en) Panoramic virtual reality framework providing a dynamic user experience
US10075758B2 (en) Synchronizing an augmented reality video stream with a displayed video stream
US9485493B2 (en) Method and system for displaying multi-viewpoint images and non-transitory computer readable storage medium thereof
CN108632633B (en) Live webcast data processing method and device
CN108632632B (en) Live webcast data processing method and device
US20200388068A1 (en) System and apparatus for user controlled virtual camera for volumetric video
CN111343476A (en) Video sharing method and device, electronic equipment and storage medium
CN113163230B (en) Video message generation method and device, electronic equipment and storage medium
CN109698949B (en) Video processing method, device and system based on virtual reality scene
CN108635863B (en) Live webcast data processing method and device
US11282169B2 (en) Method and apparatus for processing and distributing live virtual reality content
CN112188267B (en) Video playing method, device and equipment and computer storage medium
US20170353753A1 (en) Communication apparatus, communication control method, and communication system
KR20150097609A (en) Immersion communication client and server, and method for obtaining content view
CN111698521A (en) Network live broadcast method and device
CN114025180A (en) Game operation synchronization system, method, device, equipment and storage medium
CN113518260B (en) Video playing method and device, electronic equipment and computer readable storage medium
CN114374853A (en) Content display method and device, computer equipment and storage medium
CN113784180A (en) Video display method, video pushing method, video display device, video pushing device, video display equipment and storage medium
CN114139491A (en) Data processing method, device and storage medium
CN113473165A (en) Live broadcast control system, live broadcast control method, device, medium and equipment
CN112188219A (en) Video receiving method and device and video transmitting method and device
KR101085718B1 (en) System and method for offering augmented reality using server-side distributed image processing
CN113660540B (en) Image information processing method, system, display method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40031395; Country of ref document: HK)
RJ01 Rejection of invention patent application after publication (Application publication date: 20201030)