CN113891111A - Live broadcast method, device, medium and equipment for billion pixel video - Google Patents

Live broadcast method, device, medium and equipment for billion pixel video

Publication number: CN113891111A
Authority: CN (China)
Prior art keywords: video stream, video, resolution, camera, paths
Legal status: Granted
Application number: CN202111149384.XA
Other languages: Chinese (zh)
Other versions: CN113891111B
Inventors: 赵月峰 (Zhao Yuefeng), 袁潮 (Yuan Chao), 温建伟 (Wen Jianwei)
Current assignee: Beijing Zhuohe Technology Co Ltd
Original assignee: Beijing Zhuohe Technology Co Ltd
Application filed 2021-09-29 by Beijing Zhuohe Technology Co Ltd
Priority to CN202111149384.XA
Publication of CN113891111A; application granted and published as CN113891111B
Legal status: Active


Classifications

    • All classifications fall under H (ELECTRICITY), H04 (ELECTRIC COMMUNICATION TECHNIQUE), H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION). Leaf classifications:
    • H04N21/23424: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/23412: Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H04N21/234309: Reformatting operations on video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N21/234363: Reformatting operations on video signals by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N21/2393: Interfacing the upstream path of the transmission network, involving handling client requests
    • H04N21/437: Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/440218: Reformatting operations for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/440263: Reformatting operations for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/4728: End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H04N5/265: Studio circuits for mixing, e.g. special effects

Abstract

This document relates to a method, apparatus, medium, and device for live broadcast of gigapixel video. The method is applied to a server and comprises the following steps: acquiring multiple paths of camera video shot by an array camera, wherein each path comprises a high-resolution video stream and a low-resolution video stream; decoding all low-resolution video streams in the multi-path camera video and fusing and splicing them into a first video stream; receiving a playing request from a client, wherein the playing request comprises a playing area and a playing resolution; when the resolution of the first video stream corresponding to the playing area is smaller than the playing resolution, determining the K paths of camera video corresponding to the playing area and, from those K paths, determining the high-resolution video streams of N paths as the target video stream; decoding the target video stream and replacing the picture content of the first video stream corresponding to the playing area with the picture content of the target video stream; and rendering the replaced picture, generating a second video stream, and sending it to the client. With the existing decoding capability, the method and device can live-broadcast video pictures from array cameras with far more pixels.

Description

Live broadcast method, device, medium and equipment for billion pixel video
Technical Field
This document relates to the field of live video, and in particular, to a live broadcast method, apparatus, medium, and device for billion-pixel video.
Background
In the related art, a server acquires the video streams shot by all cameras of an array camera, decodes all of them, splices the video frames that share the same timestamp into a large canvas image with hundreds of megapixels, generates a video stream from that canvas, and provides the video stream to the client for viewing.
As the number of cameras in the array increases, the total resolution of the captured video grows and so does the decoding load on the server. The decoding capability of a video fusion server, however, only reaches the hundred-megapixel level; once the number of cameras exceeds the number of video streams the server can decode, the server can no longer provide the corresponding service.
How to reduce the decoding pressure on the video fusion server so that, with the same decoding hardware, an array camera with more lenses can be supported and gigapixel or even higher-resolution video can be live-broadcast is therefore a technical problem to be solved.
Disclosure of Invention
To overcome the problems in the related art, a method, apparatus, medium, and device for live broadcasting of gigapixel video are provided.
According to a first aspect herein, there is provided a live broadcast method of gigapixel video, applied to a server, comprising:
acquiring multiple paths of camera videos shot by an array camera, wherein each path of camera video comprises a high-resolution video stream and a low-resolution video stream;
decoding all low-resolution video streams in the multi-path camera video, and fusing and splicing the low-resolution video streams into a first video stream;
receiving a playing request of a client, wherein the playing request comprises a playing area and a playing resolution;
when the resolution of the first video stream corresponding to the playing area is smaller than the playing resolution, determining K paths of camera videos corresponding to the playing area, and determining a high-resolution video stream of N paths of camera videos from the K paths of camera videos as a target video stream;
decoding the target video stream, and replacing the picture content of the first video stream corresponding to the playing area with the picture content of the target video stream;
rendering the replaced picture, generating a second video stream, and sending the second video stream to the client, wherein 1 ≤ N ≤ K.
Based on the above scheme, in some embodiments, the method for live broadcasting of gigapixel videos further includes:
and when the resolution of the first video stream corresponding to the playing area is greater than or equal to the playing resolution, cutting out the picture content corresponding to the playing area from the picture content of the first video stream, generating a third video stream and sending the third video stream to the client.
Based on the above scheme, in some embodiments, the method for live broadcasting of gigapixel videos further includes:
and after the playing request of the client is received, starting a preset M-path decoder, wherein the M-path decoder is used for decoding the target video stream, and M is more than or equal to 2.
Based on the foregoing solution, in some embodiments, the determining, from the K paths of camera videos, that the high resolution video stream of the N paths of camera videos is the target video stream includes:
when K ≤ M, determining the high-resolution video streams of the K paths of camera video as the target video stream, wherein N = K;
when K > M, determining the high-resolution video streams of M paths of camera video from the K paths of camera video as the target video stream, wherein N = M.
Based on the foregoing solution, in some embodiments, when K > M, the determining, from the K paths of camera videos, that the high-resolution video stream of the M paths of camera videos is the target video stream includes:
and determining a central point of the playing area according to the coordinates of the playing area, and determining a high-resolution video stream of the M paths of camera videos with the closest distance from the central point of the video picture to the central point of the playing area in the K paths of camera videos as a target video stream.
Based on the above scheme, in some embodiments, when N = M, all of the M decoders decode the N paths of target video;
when N < M, N of the M decoders decode the N paths of target video, and the remaining decoders enter an idle state.
According to another aspect herein, there is provided a live device of gigapixel video, comprising:
the camera video acquisition module is used for acquiring a plurality of paths of camera videos shot by the array camera, wherein each path of camera video comprises a high-resolution video stream and a low-resolution video stream;
the first video stream fusion module is used for decoding all low-resolution video streams in the multi-path camera video, and fusing and splicing the low-resolution video streams into a first video stream;
the playing request receiving module is used for receiving a playing request of a client, and the playing request comprises a playing area and a playing resolution;
a target video stream determining module, configured to determine K paths of camera videos corresponding to the play area when the resolution of the first video stream corresponding to the play area is smaller than the play resolution, and determine a high-resolution video stream of N paths of camera videos from the K paths of camera videos as a target video stream;
a replacing module, configured to decode the target video stream, and replace, by using the picture content of the target video stream, the picture content of the first video stream corresponding to the playing area;
and the sending module is used for performing fusion rendering on the replaced picture, generating a second video stream and sending the second video stream to the client.
Based on the above solution, in some embodiments, the apparatus for broadcasting billion pixels video live further includes:
and the decoder starting module is used for starting a preset M-channel decoder after receiving the playing request of the client, wherein the M-channel decoder is used for decoding the target video stream, and M is more than or equal to 2.
According to another aspect herein, there is provided a computer readable storage medium having stored thereon a computer program which, when executed, implements the steps of a live method of gigapixel video.
According to another aspect herein, there is provided a computer apparatus comprising a processor, a memory and a computer program stored on the memory, the processor when executing the computer program implementing the steps of a live method of gigapixel video.
In summary, the method obtains multiple paths of camera video shot by an array camera, decodes the low-resolution video streams of those paths, and fuses and splices them into a first video stream. After a playing request from a client is received, when the resolution of the first video stream corresponding to the playing area is smaller than the playing resolution, the high-resolution video streams of N paths of camera video are determined as the target video stream from the K paths of camera video corresponding to the playing area, the picture content of the first video stream in the playing area is replaced with the high-resolution target video stream, and the result is sent to the client. With the existing decoding capability, the method can live-broadcast video from cameras with far more pixels, up to gigapixel-level high-definition pictures.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the disclosure. In the drawings:
FIG. 1 is a flow diagram illustrating a live broadcast method of gigapixel video according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a first video stream shown in accordance with an example embodiment.
FIG. 3 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment.
FIG. 4 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment.
FIG. 5 is a block diagram illustrating a computer device for live broadcast of gigapixel video, according to an example embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments clearer, the technical solutions are described below with reference to the drawings. The described embodiments are some, but not all, of the possible embodiments. All other embodiments that a person skilled in the art can derive from the embodiments herein without creative effort fall within the scope of protection. The embodiments and their features may be combined with each other without conflict.
To reduce the decoding pressure on the video fusion server and thereby support an array camera with more lenses under the same decoding conditions, the following live broadcast method of gigapixel video is provided.
FIG. 1 is a flow diagram illustrating a live broadcast method of gigapixel video according to an exemplary embodiment. Referring to fig. 1, the method includes at least steps S11 to S16, described in detail as follows:
in step S11, multiple camera videos captured by the array camera are obtained, where each camera video includes a high resolution video stream and a low resolution video stream.
The array camera comprises a plurality of cameras arranged in a fixed layout; each camera captures a high-definition video of one sub-region of the target field of view, and a video fusion server finally fuses the multiple paths of video streams obtained by the cameras into one large high-definition video picture that is provided to the client.
For example, in existing live broadcast technology the resolution of the video stream shot by a single camera can reach 4K, while the display device of a typical client is 4K or lower. When the client requests content that spans multiple video paths, the resolution of the fused 4K streams is far greater than the client's display resolution, which wastes the computing power of the video fusion server. Moreover, because the decoding capability of the video fusion server is limited, the number of video streams it can decode is limited, so without adding servers only array cameras with a fixed number of cameras can be live-broadcast.
Thus, herein, the server obtains multiple camera videos taken by the array camera, each including a high resolution video stream and a low resolution video stream.
In an embodiment, when each camera in the array acquires the video content of its corresponding area, it may simultaneously generate a high-resolution video stream and a low-resolution video stream with the same image content but different resolutions. For example, the high-resolution video stream is 4K (3840 × 2160) and the low-resolution video stream is 1080P (1920 × 1080). The server then receives both the high-resolution and the low-resolution video streams of each camera path directly from the array camera, which reduces the load on the server.
In another embodiment, each camera in the array outputs only a video stream at its original resolution, and the server downsamples each received stream to generate the corresponding low-resolution video stream. Receiving the original-resolution streams and deriving the low-resolution streams on the server according to its own settings reduces the configuration work required on the array camera.
In practical applications, the resolution of the low-resolution video stream may be determined according to the decoding capability of the video server and the total number of paths of the camera video.
For example, if the decoding capability of the video fusion server is 8K (7680 × 4320) and the array camera has 16 cameras, the resolution of the low-resolution video stream can be set to 1080P; if the array has 36 cameras, it may be set to 720P (1280 × 720).
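As an illustration only (not part of the original disclosure), the following Python sketch shows one way such a low-resolution tier could be chosen from the server's decoding capability and the camera count; the candidate list and helper name are assumptions:

    # Candidate low-resolution tiers, largest first.
    CANDIDATES = [(3840, 2160), (2560, 1440), (1920, 1080), (1280, 720), (960, 540)]

    def pick_low_resolution(decode_capability_px: int, num_cameras: int) -> tuple[int, int]:
        # Choose the largest tier whose combined pixel count over all camera paths
        # still fits within the fusion server's decoding capability.
        for w, h in CANDIDATES:
            if w * h * num_cameras <= decode_capability_px:
                return (w, h)
        return CANDIDATES[-1]

    # 8K decoding capability: 16 cameras -> 1080P, 36 cameras -> 720P (matches the example above).
    assert pick_low_resolution(7680 * 4320, 16) == (1920, 1080)
    assert pick_low_resolution(7680 * 4320, 36) == (1280, 720)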
In step S12, all low-resolution video streams in the multi-path camera video are decoded and fused and spliced into a first video stream.
After receiving the multi-path camera video, the server identifies and stores all the high-resolution video streams, then decodes all the low-resolution streams and fuses and splices them into the first video stream.
The first video stream contains the image of the entire field of view; because only the low-resolution streams are used for fusion, the decoder, video memory, and system memory consumed on the video fusion server are effectively reduced.
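As an illustration only (not part of the original disclosure), the following Python sketch shows one way decoded low-resolution frames could be fused into the first video stream's canvas, assuming a simple rows × cols grid layout; real fusion would also handle lens overlap, color matching, and timestamp alignment:

    import numpy as np

    def stitch_first_frame(frames: list[np.ndarray], rows: int, cols: int) -> np.ndarray:
        # frames are decoded low-resolution frames of one timestamp, ordered row-major.
        h, w, _ = frames[0].shape
        canvas = np.zeros((rows * h, cols * w, 3), dtype=np.uint8)
        for idx, frame in enumerate(frames):
            r, c = divmod(idx, cols)
            canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = frame
        return canvas

    # 16 cameras at 1080P arranged 4 x 4 give a 7680 x 4320 canvas for the first video stream.
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
    assert stitch_first_frame([frame] * 16, rows=4, cols=4).shape == (4320, 7680, 3)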
In an embodiment, the video fusion server may provide the fused first video stream to the client; the client scales it according to the capability of its own display device and shows the content of the entire field of view so that the user can select a region of interest. This makes use of the client's processing power and reduces the pressure on the server.
In another embodiment, the video fusion server may determine the capability of the client's display device from information sent by the client, downsample the first video stream, and provide the downsampled first video stream to the client. This lowers the requirements on the client's display device and speeds up transmission of the video data.
In step S13, a play request from the client is received, where the play request includes a play area and a play resolution.
The client selects, from the image of the full field of view, the content it is interested in, sends the selection to the server, and requests playback of the selected area. In practice, the aspect ratio of the client's selection may be constrained so that the selected area is used directly as the playing area, or an inscribed rectangle of the selected area may be computed and used as the playing area.
The playback resolution may be the highest resolution supported by the client display device or may be a resolution specified by the client.
In step S14, if the resolution of the first video stream corresponding to the playing area is smaller than the playing resolution, the K paths of camera video corresponding to the playing area are determined, and the high-resolution video streams of N paths of camera video are determined as the target video stream from the K paths.
Fig. 2 is a schematic diagram of a first video stream shown in accordance with an example embodiment. Referring to fig. 2, the first video stream is obtained by fusing the 16 low-resolution video streams captured by the array camera, each with a resolution of 1080P (1920 × 1080). Assume the playing area requested by the client is area A in the figure and the client's playing resolution is 4K (3840 × 2160).
When the requested playing area is A, the pixel region of area A in the first video stream is 3072 × 1728, i.e. the resolution of the first video stream corresponding to the playing area is 3072 × 1728, which is smaller than the playing resolution of 3840 × 2160. The image content of the first video stream in the playing area therefore cannot meet the playing resolution, and in order to show clearer content to the client, the video content in the playing area needs to be replaced with content from the high-resolution video streams.
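As an illustration only (not part of the original disclosure), the following Python sketch expresses the step-S14 decision in code; the helper name and argument layout are assumptions:

    def needs_high_res(area_w: int, area_h: int, play_w: int, play_h: int) -> bool:
        # The playing area must be upscaled (and therefore replaced with high-resolution
        # content) if its pixel size in the first video stream is smaller than the
        # requested playing resolution in either dimension.
        return area_w < play_w or area_h < play_h

    # Area A from Fig. 2: 3072 x 1728 in the first video stream, playing resolution 4K.
    assert needs_high_res(3072, 1728, 3840, 2160) is True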
Referring to fig. 2, playing area A corresponds to 9 video streams: video 1, video 2, video 3, video 5, video 6, video 7, video 9, video 10, and video 11; that is, K = 9.
The high-resolution video streams of all K camera paths can be used as the target video stream, replacing all content in the playing area: for example, all 9 high-resolution streams are decoded, merged, and spliced, and the content corresponding to the playing area is then cut out and provided to the client.
Decoding all K high-resolution streams, however, may exceed the decoding capability of the server. The server can instead select N of the K high-resolution streams and replace the content in the playing area with the corresponding content of those N streams. For example, if the server supports decoding 4 video streams simultaneously, 4 high-resolution streams can be selected to replace content in the playing area; as shown in the figure, the high-resolution streams of video 6, video 7, video 10, and video 11 can be selected as the target video stream.
In step S15, the target video stream is decoded, and the picture content of the first video stream corresponding to the playback area is replaced with the picture content of the target video stream.
The target video stream is decoded, and the corresponding content in the playing area is replaced with the high-resolution picture of each selected path.
For example, referring to fig. 2, the high-resolution streams of video 6, video 7, video 10, and video 11 are merged and fused with the low-resolution streams of video 1, video 2, video 3, video 5, and video 9, and the video corresponding to playing area A is then cut out from the fused picture.
Alternatively, the high-resolution streams of video 6, video 7, video 10, and video 11 are first cropped according to the playing area, then the cropped streams are spliced and fused, replacing the corresponding content of the first video stream in the playing area.
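As an illustration only (not part of the original disclosure), the following Python sketch shows one way step S15 could composite the decoded high-resolution tiles over the playing area; the grid layout, scale factor, and coordinate conventions are assumptions:

    import numpy as np

    def replace_and_crop(canvas_lowres: np.ndarray,
                         high_res_tiles: dict[tuple[int, int], np.ndarray],
                         tile_w: int, tile_h: int, scale: int,
                         area_x: int, area_y: int, area_w: int, area_h: int) -> np.ndarray:
        # Upscale the fused low-resolution canvas to the high-resolution pixel grid
        # (nearest-neighbour here purely for brevity).
        canvas = np.repeat(np.repeat(canvas_lowres, scale, axis=0), scale, axis=1)
        hw, hh = tile_w * scale, tile_h * scale
        # Paste each decoded high-resolution tile (e.g. videos 6, 7, 10, 11) over its grid cell;
        # each tile is expected to be a full high-resolution frame of size (hh, hw, 3).
        for (row, col), tile in high_res_tiles.items():
            canvas[row * hh:(row + 1) * hh, col * hw:(col + 1) * hw] = tile
        # Crop the playing area; coordinates are given on the high-resolution pixel grid.
        return canvas[area_y:area_y + area_h, area_x:area_x + area_w]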
In step S16, the replaced picture is rendered to generate a second video stream, which is sent to the client.
The rendered second video stream is sent to the client, and the client's display device shows the video picture of the playing area. In this way, when the resolution of the first video stream in the requested playing area is smaller than the requested playing resolution, the image content of the high-resolution streams replaces the content of that area, which improves the user experience, reduces the decoding pressure on the server, and realizes live broadcast of gigapixel video without changing the configuration of the existing server.
In an exemplary embodiment, the live broadcast method of gigapixel video further comprises: if the resolution of the first video stream corresponding to the playing area is greater than or equal to the playing resolution, cutting out the picture content corresponding to the playing area from the picture content of the first video stream, generating a third video stream and sending the third video stream to the client.
When the playing area spans enough video paths, the resolution of the first video stream in the playing area is greater than or equal to the playing resolution. Take playing area B in fig. 2 as an example: each low-resolution stream is 1080P (1920 × 1080) and the client's playing resolution is 4K (3840 × 2160), so the client's display can show at most the picture content of 4 low-resolution 1080P streams. If the playing area covers at least the display area of 4 such streams, the resolution of the first video stream in the playing area is greater than or equal to the playing resolution; in that case the already fused first video stream only needs to be cropped to the playing area to generate a third video stream, which is provided to the client.
Note that the actual resolution of the second or third video stream may still exceed the client's playing resolution; the server or the client then downsamples it to the playing resolution so that all image content of the playing area fits on the client's display device.
In an exemplary embodiment, the live broadcast method of gigapixel video further comprises:
after receiving a playing request of a client, starting a preset M-path decoder, wherein the M-path decoder is used for decoding a target video stream, and M is larger than or equal to 2.
Due to the performance limitation of the server, when the number of high-resolution video streams to be decoded is large, the decoding capability of the server is exceeded, and in order to provide better decoding service, the maximum number of decoded paths may be preset according to the decoding capability of the server, for example, 2-path, 4-path, or more decoders may be set. The present disclosure describes the predetermined number of decoder paths as M — 4 paths.
After receiving a play request from a client, starting a preset 4-way decoder. When the resolution of the first video stream corresponding to the playing area is smaller than the playing resolution, K paths of camera videos corresponding to the playing area are determined, and after the target video stream is determined from the K paths of camera videos, the target video stream can be decoded by using 4 paths of decoders which are opened in advance, so that the decoding speed is increased, the response time of a client request is shortened, and the user experience is improved. When the client changes the playing area through operations such as translation and zooming, the target video stream is changed, the target video stream can be decoded by the opened 4-channel decoder all the time, multiplexing of the 4-channel decoder is achieved, frequent switching of the decoder is not needed, after the target video stream is changed, the target video stream can be switched rapidly, the changed target video stream is decoded, the decoding speed is increased, and the watching experience of the client is improved.
In an example, the preset M-way decoder may be started after the target video stream is determined, so as to avoid the waste of the server performance caused by starting the M-way decoder when the client does not need to watch the high definition video content in the specific area.
In an exemplary embodiment, determining the high-resolution video streams of N paths of camera video as the target video stream from the K paths of camera video in step S14 includes:
when K ≤ M, determining the high-resolution video streams of the K paths of camera video as the target video stream, wherein N = K;
when K > M, determining the high-resolution video streams of M paths of camera video from the K paths of camera video as the target video stream, wherein N = M.
If K ≤ M, the number of camera paths covering the playing area does not exceed the number of preset decoder paths, so the M decoders can decode the high-resolution streams of all K paths. In that case the high-resolution streams of all K paths are taken as the target video stream, decoded, and used to replace all low-resolution content in the playing area, so the entire playing area is played in high definition.
If K > M, the number of camera paths covering the playing area exceeds the number of preset decoder paths and therefore the decoders' capability. M of the K high-resolution streams are then selected as the target video stream, decoded, and used to replace the corresponding low-resolution content in the playing area, so part of the playing area is played in high definition.
In an exemplary embodiment, when N = M, all of the M decoder paths decode the N paths of target video;
when N < M, N of the M decoders decode the N paths of target video and the remaining decoders enter an idle state.
During live broadcast, if the number of target video stream paths equals the preset number of decoder paths, all M decoders are used; if it is smaller, N of the M decoders decode the N target paths and the rest stay idle. The preset M-path decoder therefore remains open at all times, so that whenever the client pans or zooms and the target video stream changes, the decoders can decode the new target stream immediately, which improves decoding speed.
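As an illustration only (not part of the original disclosure), the following Python sketch models such a fixed, always-open pool of M decoders; the class and field names are assumptions:

    class DecoderPool:
        """A fixed pool of M decoders that is opened once and reused across play requests."""
        def __init__(self, m: int):
            # Decoders are opened once and never torn down during the session.
            self.decoders = [{"id": i, "stream": None} for i in range(m)]

        def assign(self, target_streams: list[str]) -> None:
            # Map the N target streams (N <= M) onto the pool; unassigned decoders stay idle.
            assert len(target_streams) <= len(self.decoders)
            for i, dec in enumerate(self.decoders):
                dec["stream"] = target_streams[i] if i < len(target_streams) else None  # None = idle

    pool = DecoderPool(m=4)
    pool.assign(["cam6_high", "cam7_high", "cam10_high", "cam11_high"])  # N = M: all four busy
    pool.assign(["cam6_high", "cam7_high"])                              # N < M: two decoders idle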
In an exemplary embodiment, when K > M, determining the high resolution video stream of the M-path camera video as the target video stream from the K-path camera video includes:
and determining a central point of the playing area according to the coordinates of the playing area, and determining a high-resolution video stream of the M paths of camera videos with the closest distance from the central point of the video picture to the central point of the playing area in the K paths of camera videos as a target video stream.
Referring to fig. 2, assuming that the current playing area is an area a, a center point of the area a may be determined according to coordinates of the playing area. The camera video corresponding to the playing area a is video 1, video 2, video 3, video 5, video 6, video 7, video 9, video 10, and video 11, and as can be seen from the figure, the distance from the center point of the picture of the video 6, video 7, video 10, and video 11 to the center point of the playing area is the closest, so when M is 4, the high-resolution video stream corresponding to the video 6, video 7, video 10, and video 11 is taken as the target video stream. Therefore, in the generated second video stream, the picture contents of the video 6, the video 7, the video 10, and the video 11 in the playing area correspond to the picture contents of the high-resolution video stream, and are displayed as high-definition images, while the picture contents of the remaining videos are still the picture contents of the low-resolution video stream.
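As an illustration only (not part of the original disclosure), the following Python sketch shows the center-distance selection rule; the tile-center coordinates and the playing-area center used in the example are hypothetical values consistent with a 4 x 4 grid of 1080P tiles:

    import math

    def pick_target_tiles(tile_centers: dict[str, tuple[float, float]],
                          area_center: tuple[float, float],
                          m: int) -> list[str]:
        # Sort the K overlapping tiles by the distance from their picture center
        # to the center of the playing area, and keep the closest M.
        cx, cy = area_center
        ordered = sorted(tile_centers,
                         key=lambda t: math.hypot(tile_centers[t][0] - cx,
                                                  tile_centers[t][1] - cy))
        return ordered[:m]

    # Hypothetical centers for some tiles overlapping area A in Fig. 2, and a
    # hypothetical area-A center point.
    centers = {"video1": (960, 540), "video6": (2880, 1620), "video7": (4800, 1620),
               "video10": (2880, 2700), "video11": (4800, 2700)}
    print(pick_target_tiles(centers, area_center=(3800, 2100), m=4))
    # -> ['video6', 'video10', 'video7', 'video11'] for these assumed coordinates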
Through these embodiments, the low-resolution videos of the multiple camera paths are fused into the first video stream in advance and a preset M-path decoder is started. When the resolution of the first video stream in the playing area requested by the client is smaller than the playing resolution, the high-resolution streams of N camera paths (N ≤ M) are determined as the target video stream from the K paths covering the playing area, so that throughout the live broadcast the already-open M-path decoder decodes the target video stream and its high-resolution content replaces the low-resolution content. This removes the decoding-capability bottleneck of the video fusion server and, with the same decoding capability, enables live broadcast of cameras with far more pixels, even gigapixel-level high-definition pictures.
FIG. 3 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment. Referring to fig. 3, a live device of gigapixel video includes: the video playing system comprises a camera video acquisition module 301, a first video stream fusion module 302, a playing request receiving module 303, a target video stream determining module 304, a replacing module 305 and a sending module 306.
The camera video acquisition module 301 is configured to acquire multiple camera videos captured by the array camera, where each camera video includes a high resolution video stream and a low resolution video stream.
The first video stream fusion module 302 is configured to decode all low resolution video streams of the multi-camera video and fuse and splice into a first video stream.
The play request receiving module 303 is configured to receive a play request of a client, where the play request includes a play area and a play resolution.
The target video stream determining module 304 is configured to determine K paths of camera videos corresponding to the playing area when the resolution of the first video stream corresponding to the playing area is smaller than the playing resolution, and determine a high-resolution video stream of N paths of camera videos from the K paths of camera videos as a target video stream.
The replacing module 305 is configured to decode the target video stream, and replace the picture content of the first video stream corresponding to the playing area with the picture content of the target video stream.
the sending module 306 is configured to perform fusion rendering on the replaced picture, generate a second video stream, and send the second video stream to the client.
The sending module 306 is further configured to, when the resolution of the first video stream corresponding to the playing area is greater than or equal to the playing resolution, cut out the picture content corresponding to the playing area from the picture content of the first video stream, and generate a third video stream to send to the client.
FIG. 4 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment. Referring to fig. 4, the live broadcasting device of gigapixel video further includes: the decoder starts the module 401.
The decoder starting module 401 is configured to start a preset M-path decoder after receiving a play request from the client, where M ≥ 2.
The target video stream determining module 304 is further configured to determine the high-resolution video streams of the K paths of camera video as the target video stream when K ≤ M, in which case N = K;
and, when K > M, to determine the high-resolution video streams of M paths of camera video from the K paths of camera video as the target video stream, in which case N = M.
When K > M, determining the high-resolution video stream of the M camera videos as the target video stream from the K camera videos comprises:
and determining a central point of the playing area according to the coordinates of the playing area, and determining a high-resolution video stream of the M paths of camera videos with the closest distance from the central point of the video picture to the central point of the playing area in the K paths of camera videos as a target video stream.
FIG. 5 is a block diagram illustrating a computer device 500 for live broadcast of gigapixel video, according to an example embodiment. For example, the computer device 500 may be provided as a server. Referring to fig. 5, the computer device 500 includes a processor 501; the number of processors may be one or more as needed. The computer device 500 further includes a memory 502 for storing instructions executable by the processor 501, such as application programs; the number of memories may likewise be one or more, and each may store one or more application programs. The processor 501 is configured to execute the instructions to perform the live broadcast method of gigapixel video described above.
As will be appreciated by one skilled in the art, the embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer, and the like. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the article or device comprising the element.
While the preferred embodiments herein have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of this disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope thereof. Thus, it is intended that such changes and modifications be included herein, provided they come within the scope of the appended claims and their equivalents.

Claims (10)

1. A live broadcast method of gigapixel video, applied to a server, comprising the following steps:
acquiring multiple paths of camera videos shot by an array camera, wherein each path of camera video comprises a high-resolution video stream and a low-resolution video stream;
decoding all low-resolution video streams in the multi-path camera video, and fusing and splicing the low-resolution video streams into a first video stream;
receiving a playing request of a client, wherein the playing request comprises a playing area and a playing resolution;
when the resolution of the first video stream corresponding to the playing area is smaller than the playing resolution, determining K paths of camera videos corresponding to the playing area, and determining a high-resolution video stream of N paths of camera videos from the K paths of camera videos as a target video stream;
decoding the target video stream, and replacing the picture content of the first video stream corresponding to the playing area with the picture content of the target video stream;
rendering the replaced picture, generating a second video stream, and sending the second video stream to the client, wherein 1 ≤ N ≤ K.
2. A method for live broadcast of gigapixel video as claimed in claim 1, further comprising:
and when the resolution of the first video stream corresponding to the playing area is greater than or equal to the playing resolution, cutting out the picture content corresponding to the playing area from the picture content of the first video stream, generating a third video stream and sending the third video stream to the client.
3. A method of live broadcast of gigapixel video as claimed in claim 1, further comprising:
and after the playing request of the client is received, starting a preset M-path decoder, wherein the M-path decoder is used for decoding the target video stream, and M is more than or equal to 2.
4. The live broadcast method of gigapixel video as claimed in claim 3, wherein said determining a high resolution video stream of N camera videos from said K camera videos as a target video stream comprises:
when K ≤ M, determining the high-resolution video streams of the K paths of camera video as the target video stream, wherein N = K;
when K > M, determining the high-resolution video streams of M paths of camera video from the K paths of camera video as the target video stream, wherein N = M.
5. The live broadcast method of gigapixel video as claimed in claim 4, wherein, when K > M, determining the high-resolution video streams of M paths of camera video from the K paths of camera video as the target video stream comprises:
determining the center point of the playing area according to the coordinates of the playing area, and determining, as the target video stream, the high-resolution video streams of the M paths of camera video among the K paths whose picture center points are closest to the center point of the playing area.
6. The live broadcast method for gigapixel video as claimed in claim 4, wherein:
when N is equal to M, all of the M paths of decoders decode the N paths of target video streams; and
when N is less than M, N of the M paths of decoders decode the N paths of target video streams, and the remaining decoders enter an idle state.
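A minimal sketch of the decoder behaviour described in claims 3 and 6, assuming a simple in-process pool; the Decoder and DecoderPool classes are hypothetical stand-ins for real hardware or software decoder channels:

from typing import List

class Decoder:
    def __init__(self, index: int):
        self.index = index
        self.busy = False

    def decode(self, stream_id: str) -> str:
        self.busy = True
        return f"decoder{self.index}:decoded({stream_id})"

class DecoderPool:
    def __init__(self, m: int):
        assert m >= 2, "claim 3 requires M >= 2 preset decoders"
        self.decoders = [Decoder(i) for i in range(m)]

    def decode_targets(self, target_streams: List[str]) -> List[str]:
        n = len(target_streams)
        assert n <= len(self.decoders), "N may not exceed M"
        # Decoders 0..N-1 decode the N paths of target video streams;
        # decoders N..M-1 remain idle when N < M (claim 6).
        return [self.decoders[i].decode(s)
                for i, s in enumerate(target_streams)]

pool = DecoderPool(4)
print(pool.decode_targets(["cam2_high", "cam3_high"]))  # N = 2 < M = 4
print([d.busy for d in pool.decoders])                  # last two stay idle

With M = 4 and N = 2 target streams, only the first two decoders report busy; the remaining two stay idle, matching the second case of claim 6.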
7. A live broadcast device for gigapixel video, comprising:
a camera video acquisition module, configured to acquire multiple paths of camera video captured by an array camera, wherein each path of camera video comprises a high-resolution video stream and a low-resolution video stream;
a first video stream fusion module, configured to decode all of the low-resolution video streams in the multiple paths of camera video, and to fuse and splice the decoded low-resolution video streams into a first video stream;
a playing request receiving module, configured to receive a playing request from a client, wherein the playing request comprises a playing area and a playing resolution;
a target video stream determining module, configured to determine K paths of camera video corresponding to the playing area when the resolution of the first video stream corresponding to the playing area is lower than the playing resolution, and to determine the high-resolution video streams of N paths of camera video from the K paths of camera video as a target video stream;
a replacing module, configured to decode the target video stream and replace the picture content of the first video stream corresponding to the playing area with the picture content of the target video stream; and
a sending module, configured to perform fusion rendering on the replaced picture, generate a second video stream, and send the second video stream to the client.
8. The live broadcast device for gigapixel video as claimed in claim 7, further comprising:
a decoder starting module, configured to start M preset paths of decoders after the playing request from the client is received, wherein the M paths of decoders are used for decoding the target video stream, and M is greater than or equal to 2.
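For illustration, the modules recited in claims 7 and 8 might be composed on the server roughly as below; the class name GigapixelLiveServer and all attribute and method names are hypothetical and only mirror the module list of the claims.

class GigapixelLiveServer:
    """Illustrative composition of the modules recited in claims 7 and 8."""

    def __init__(self, acquisition, fusion, request_receiver,
                 target_selector, replacer, sender, decoder_starter):
        self.acquisition = acquisition            # camera video acquisition module
        self.fusion = fusion                      # first video stream fusion module
        self.request_receiver = request_receiver  # playing request receiving module
        self.target_selector = target_selector    # target video stream determining module
        self.replacer = replacer                  # replacing module
        self.sender = sender                      # sending module
        self.decoder_starter = decoder_starter    # decoder starting module (claim 8)

    def serve_once(self):
        videos = self.acquisition.acquire()           # multiple paths of camera video
        first_stream = self.fusion.fuse(videos)       # fused low-resolution first stream
        request = self.request_receiver.receive()     # playing area + playing resolution
        decoders = self.decoder_starter.start()       # M preset paths of decoders
        targets = self.target_selector.select(first_stream, request, videos)
        patched = self.replacer.replace(first_stream, targets, decoders)
        self.sender.send(patched, request)            # second video stream to the client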
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method according to any one of claims 1-6.
10. A computer device, comprising a processor, a memory, and a computer program stored in the memory, characterized in that the steps of the method according to any one of claims 1-6 are implemented when the computer program is executed by the processor.
CN202111149384.XA 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video Active CN113891111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111149384.XA CN113891111B (en) 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video

Publications (2)

Publication Number Publication Date
CN113891111A true CN113891111A (en) 2022-01-04
CN113891111B CN113891111B (en) 2023-11-21

Family

ID=79007915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111149384.XA Active CN113891111B (en) 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video

Country Status (1)

Country Link
CN (1) CN113891111B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023236508A1 (en) * 2022-06-07 2023-12-14 北京拙河科技有限公司 Image stitching method and system based on billion-pixel array camera
CN117412186A (en) * 2023-10-08 2024-01-16 上海长光以立信息科技有限公司 Video stream processing method, system, electronic equipment and medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106162316A (en) * 2015-04-15 2016-11-23 腾讯科技(深圳)有限公司 The data processing method of a kind of video flowing, device and system
CN106162232A (en) * 2016-06-29 2016-11-23 乐视控股(北京)有限公司 video playing control method and device
US20170118528A1 (en) * 2015-10-21 2017-04-27 Flir Commercial Systems, Inc. System and method for adaptive video streaming
US20180270486A1 (en) * 2017-03-17 2018-09-20 Samsung Electronics Co., Ltd. Method and apparatus for packaging and streaming of virtual reality (vr) media content
CN110662100A (en) * 2018-06-28 2020-01-07 中兴通讯股份有限公司 Information processing method, device and system and computer readable storage medium
CN110662109A (en) * 2019-09-29 2020-01-07 咪咕动漫有限公司 Video playing method, electronic equipment and computer readable storage medium
CN111385607A (en) * 2018-12-29 2020-07-07 浙江宇视科技有限公司 Resolution determination method and device, storage medium, client and server
CN111614975A (en) * 2020-05-08 2020-09-01 北京拙河科技有限公司 Method, device, medium and equipment for playing hundred million-level pixel video
US10999344B1 (en) * 2020-06-15 2021-05-04 Google Llc Dynamic video resolution and quality for improved video conferencing
WO2021179783A1 (en) * 2020-03-11 2021-09-16 叠境数字科技(上海)有限公司 Free viewpoint-based video live broadcast processing method, device, system, chip and medium

Also Published As

Publication number Publication date
CN113891111B (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN111193937B (en) Live video data processing method, device, equipment and medium
CN113891111B (en) Live broadcasting method, device, medium and equipment of billion pixel video
KR101855540B1 (en) Content supplying device, content supplying method, content reproduction device, content reproduction method, program, and content viewing system
CN111614975B (en) Hundred million-level pixel video playing method, device, medium and equipment
CN111225228B (en) Video live broadcast method, device, equipment and medium
US10225613B2 (en) Method and apparatus for video playing processing and television
US20070107018A1 (en) Method, apparatus and system for controlling a scene structure of multiple channels to be displayed on a mobile terminal in a mobile broadcast system
CN112468832A (en) Billion-level pixel panoramic video live broadcast method, device, medium and equipment
CN105898395A (en) Network video playing method, device and system
CN111601151A (en) Method, device, medium and equipment for reviewing hundred million-level pixel video
CN110351606B (en) Media information processing method, related device and computer storage medium
CN113014943A (en) Video playing method, video player and video live broadcasting system
US20220053127A1 (en) Image Processing Method, Apparatus and System, Network Device, Terminal and Storage Medium
US20200213631A1 (en) Transmission system for multi-channel image, control method therefor, and multi-channel image playback method and apparatus
CN114189696A (en) Video playing method and device
US11157146B2 (en) Display apparatus and control method thereof for providing preview content
CN112243136B (en) Content playing method, video storage method and device
US11871137B2 (en) Method and apparatus for converting picture into video, and device and storage medium
CN113891112B (en) Live broadcasting method, device, medium and equipment of billion pixel video
Bassbouss et al. Towards a high efficient 360° video processing and streaming solution in a multiscreen environment
CN112565693A (en) Method, system and equipment for monitoring video on demand
CN112925492A (en) Multimedia wall-climbing method, client and monitoring platform
CN112995752A (en) Full-view interactive live broadcast method, system, terminal and computer readable storage medium
CN115604523A (en) Processing method of free visual angle video scene, client and server
Mori et al. Efficient Zooming Without Resolution Loss in Streaming Video Using 8K Video Sources

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant