CN113891112B - Live broadcasting method, device, medium and equipment of billion pixel video - Google Patents

Live broadcasting method, device, medium and equipment of billion pixel video

Info

Publication number
CN113891112B
CN113891112B (application CN202111149485.7A)
Authority
CN
China
Prior art keywords
video
resolution
target
client
canvas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111149485.7A
Other languages
Chinese (zh)
Other versions
CN113891112A (en)
Inventor
赵月峰
温建伟
袁潮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202111149485.7A
Publication of CN113891112A
Application granted
Publication of CN113891112B


Classifications

    • H04N21/23424 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/234363 — Reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N21/2393 — Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/437 — Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/44016 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/440263 — Reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/4728 — End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H04N5/265 — Studio circuits; Mixing
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present invention relates to a live broadcasting method, device, medium, and equipment for billion pixel video. Applied to a server side, the method comprises the following steps: acquiring all camera videos shot by an array camera, wherein each path of camera video comprises M video streams with different resolutions; fusing video streams of the same resolution across all camera videos into M layers of canvas with different resolutions, wherein M is more than or equal to 2; and, after receiving a live broadcast request from a client, providing information about the M layers of canvas with different resolutions to the client, so that the client determines, according to its selection area and display resolution, the canvas of target resolution among the M layers, and pulls the video streams corresponding to the canvas of target resolution. In this way the server side provides fixed computing power and offloads computation to the client, so that a large number of clients can be supported without increasing server-side computing power.

Description

Live broadcasting method, device, medium and equipment of billion pixel video
Technical Field
This document relates to the field of live video, and more particularly to a live broadcasting method, device, medium, and equipment for billion pixel video.
Background
In the related art, an array camera comprises a plurality of cameras that simultaneously capture multiple paths of video, which can be spliced into a billion pixel video. During interactive live broadcast, the client sends the coordinates of its selected area to the server, and the server determines which video paths need to be decoded and performs cutting, rendering, encoding, and similar operations on them. When multiple clients view interactively, each client's selection area may differ, and the server must perform the corresponding computation for each client's request. However, the server side's computing and encoding capabilities are limited; once the number of clients reaches that upper limit, the system can only be expanded by adding servers. This traditional live video method is costly and unsuitable for live broadcast of billion pixel video to a large number of users.
Disclosure of Invention
To overcome the problems in the related art, provided herein are a live broadcasting method, apparatus, medium, and device for billion pixel video.
According to a first aspect of the present invention, there is provided a live broadcasting method of billion pixel video, applied to a server, comprising:
acquiring all camera videos shot by an array camera, wherein each path of camera video comprises M video streams with different resolutions, and M is more than or equal to 2;
fusing video streams of the same resolution across all camera videos into M layers of canvas with different resolutions;
and, after receiving a live broadcast request from a client, providing information about the M layers of canvas with different resolutions to the client, so that the client determines, according to its selection area and display resolution, the canvas of target resolution among the M layers, and pulls the video streams corresponding to the canvas of target resolution and the selection area.
Based on the foregoing, in some embodiments, the live broadcasting method of billion pixel video further comprises:
dividing the M layers of canvas with different resolutions using the same division rule, so that each canvas is divided into N blocks, wherein N is more than or equal to 1, and numbering each block, each block corresponding to multiple paths of camera video;
and storing the video streams of different resolutions of the multi-path camera video corresponding to each block on one or more designated servers, and establishing the correspondence between block numbers, video streams of different resolutions, and storage servers.
Based on the foregoing, in some embodiments, after receiving the play request of the client, the method further includes:
and providing the segmentation rule to the client.
According to another aspect herein, there is provided a live broadcasting method of billion pixel video, for use on a client, comprising:
sending a live broadcast request to a server to acquire information about M layers of canvas with different resolutions;
determining, according to the selection area and the display resolution, the canvas of target resolution corresponding to them among the M layers of canvas with different resolutions;
determining the multiple paths of camera video corresponding to the selection area, and the target video streams corresponding to the canvas of target resolution in those camera videos;
pulling the target video streams;
and splicing and fusing the target video streams according to their positions in the selection area, cutting, and rendering to a display device.
Based on the foregoing, in some embodiments, after the live broadcast request is sent to the server, the live broadcast method of the billion pixel video further includes:
and obtaining a segmentation rule from the server, wherein the segmentation rule comprises the correspondence between block numbers, video streams of different resolutions, and storage servers.
Based on the foregoing, in some embodiments, determining the multiple paths of camera video corresponding to the selection area, and the target video streams corresponding to the canvas of target resolution in those camera videos, includes:
determining the numbers of the blocks corresponding to the selection area;
determining the target video streams in each block according to the block and the target resolution;
querying the segmentation rule to determine the target server where each target video stream is located;
and pulling the target video streams from the target server.
According to another aspect herein, there is provided a live video device for billion pixels, applied to a server, including:
the array video acquisition module is used for acquiring all camera videos shot by the array camera, wherein each path of camera video comprises M video streams with different resolutions;
the canvas fusion module is used for fusing video streams with the same resolution in all camera videos into M layers of canvases with different resolutions;
and the response module is used for providing, after receiving the live broadcast request of the client, the information about the M layers of canvas with different resolutions to the client, so that the client determines, according to its selection area and display resolution, the canvas of target resolution among the M layers, and pulls the video streams corresponding to the canvas of target resolution and the selection area.
According to another aspect herein, there is provided a live device of billion pixel video for use on a client, comprising:
the request module is used for sending a live broadcast request to the server and acquiring related information of M layers of canvases with different resolutions;
the canvas selection module is used for determining, according to the selection area and the display resolution, the canvas of target resolution corresponding to them among the M layers of canvas with different resolutions;
a target video stream determining module, configured to determine multiple paths of camera videos corresponding to the selection area, and a target video stream corresponding to the canvas of the target resolution in the multiple paths of camera videos;
the pulling module is used for pulling the target video stream;
and the rendering module is used for splicing and fusing the target video stream according to the position in the selected area, and rendering the target video stream to a display device after cutting.
According to another aspect herein, there is provided a computer readable storage medium having stored thereon a computer program which when executed performs the steps of a live method of billion pixel video.
According to another aspect herein, there is provided a computer apparatus comprising a processor, a memory and a computer program stored on the memory, the processor implementing the steps of a live method of billion pixel video when the computer program is executed.
According to the methods herein, the server acquires video streams of different resolutions for the multiple camera videos of an array camera and builds multiple layers of canvas with different resolutions. After receiving a client's live broadcast request, the server provides information about these canvas layers to the client, so that the client can determine the canvas of target resolution according to its selection area and display resolution and pull the video streams corresponding to that canvas and the selection area. Computation is thus offloaded to the client while the server's computing power remains fixed, so that a large number of clients can be supported without increasing server-side computing power.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the disclosure, and do not constitute a limitation on the disclosure. In the drawings:
fig. 1 is a flow chart illustrating a method of live video of billions of pixels, according to an example embodiment.
FIG. 2 is a schematic diagram of an array camera image shown according to an exemplary embodiment.
FIG. 3 is a diagram illustrating segmentation of a canvas according to an exemplary embodiment.
Fig. 4 is a flow chart illustrating a method of live video of billions of pixels, according to an example embodiment.
Fig. 5 is a block diagram of a live device of billion pixel video, shown according to an example embodiment.
Fig. 6 is a block diagram of a live device of billion pixel video, shown according to an example embodiment.
FIG. 7 is a block diagram of a computer device, according to an example embodiment.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments herein more apparent, the technical solutions in the embodiments herein will be clearly and completely described below with reference to the accompanying drawings in the embodiments herein, and it is apparent that the described embodiments are some, but not all, embodiments herein. All other embodiments, based on the embodiments herein, which a person of ordinary skill in the art would obtain without undue burden, are within the scope of protection herein. It should be noted that, without conflict, the embodiments and features of the embodiments herein may be arbitrarily combined with each other.
In the related art, an array camera comprises a plurality of cameras that simultaneously capture multiple paths of video, which can be spliced into a billion pixel video. During interactive live broadcast, the client sends the coordinates of its selected area to the server, and the server determines which video paths need to be decoded and performs cutting, rendering, encoding, and similar operations on them. When multiple clients view interactively, each client's selection area may differ, and the server must perform the corresponding computation for each request. However, the server side's computing and encoding capabilities are limited; once the number of clients reaches that upper limit, capacity can only be expanded by adding servers, which is costly and unsuitable for live broadcast of billion pixel video to a large number of users.
To address the above issues, a live broadcasting method of billion pixel video is provided herein. Fig. 1 is a flow chart illustrating a method of live video of billions of pixels, according to an example embodiment. Referring to fig. 1, the live broadcasting method of billion pixel video is applied to a server and comprises at least steps S11 to S13, described in detail below.
Step S11, acquiring all camera videos shot by the array camera, wherein each path of camera video comprises M video streams with different resolutions, and M is more than or equal to 2.
The array camera comprises a plurality of cameras arranged in a certain order, each acquiring video images of a different area of the target field of view. When the video images acquired by the different cameras are arranged in camera order, they can be spliced and fused into a video image of a billion pixels or more.
Herein, all camera videos shot by the array camera are acquired by the server. The server may be a single server or a cluster of servers; in practice, the number of servers may be determined according to each server's service capability and the number of videos to be processed.
Each camera in the array camera acquires a path of camera video.
In one example, a single camera in the array may have encoding capability: it encodes a video stream at the original resolution and also encodes that original-resolution video into several video streams of other resolutions. For example, the original-resolution stream may be 4K (3840×2160), additionally encoded into a 2K (2560×1440) stream and a 1080P (1920×1080) stream. The camera video of each camera then comprises several streams of different resolutions, such as 4K, 2K, and 1080P. Which resolutions to encode, and how many, depends on the specific application scenario and is not limited herein.
In another example, a single camera in the array may lack encoding capability and output only a video stream of the original resolution; the server then receives the original-resolution stream and encodes it into multiple streams of different resolutions.
Thus, the server stores, for each camera video, multiple video streams of different resolutions that show the same picture content at different resolutions. The obtained streams of the different camera videos can also be labeled.
For example, the cameras in the array are numbered according to the position, within the field of view, of the image content each camera acquires. FIG. 2 is a schematic diagram of an array camera image shown according to an exemplary embodiment. Referring to fig. 2, taking an array camera comprising 12 cameras as an example, 12 paths of camera video are acquired by the 12 cameras. Each path of camera video is numbered 1 through 12 according to the corresponding camera position. For the first path of camera video, the server encodes the original-resolution video acquired by the camera into the following 3 streams: the 4K (3840×2160) stream, identified as 1-1; the 2K (2560×1440) stream, identified as 1-2; and the 1080P (1920×1080) stream, identified as 1-3. Similarly, for the second path of camera video, the three streams are identified as 2-1, 2-2, and 2-3, and so on up to camera video 12.
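The labeling scheme above can be sketched as follows. This is an illustrative helper, not from the patent; the resolution ladder and the "camera-level" label format are taken from the example, while the function names are assumptions:

```python
# Sketch of the stream-labeling scheme: camera videos are numbered 1..12
# by grid position, and each camera's streams are labeled "<camera>-<level>"
# where levels 1/2/3 correspond to the 4K / 2K / 1080P encodings above.

RESOLUTION_LADDER = {1: (3840, 2160), 2: (2560, 1440), 3: (1920, 1080)}

def stream_id(camera: int, level: int) -> str:
    """Label for one encoded stream of one camera video, e.g. '1-2'."""
    if level not in RESOLUTION_LADDER:
        raise ValueError(f"unknown resolution level {level}")
    return f"{camera}-{level}"

def all_stream_ids(num_cameras: int = 12) -> list[str]:
    """All stream labels for a 12-camera array: '1-1' through '12-3'."""
    return [stream_id(c, lv)
            for c in range(1, num_cameras + 1)
            for lv in RESOLUTION_LADDER]
```

With 12 cameras and 3 resolution levels, the server manages 36 labeled streams in total.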
Step S12, fusing video streams of the same resolution across all camera videos into M canvases with different resolutions.
Taking the above 12 paths of camera video as an example, the server performs timestamp synchronization on all video streams, then decodes and fusion-renders the streams of the same resolution across the 12 paths. The video images identified as 1-1, 2-1, … 12-1 with the same timestamp in the 4K (3840×2160) streams are merged into one canvas, which may be identified as the first-layer canvas; similarly, the video images identified as 1-2, 2-2, … 12-2 with the same timestamp in the 2K (2560×1440) streams are merged into one canvas, identified as the second-layer canvas; and so on. In this way, three layers of canvas corresponding to 4K, 2K, and 1080P are obtained.
In practice, the frame images with the newest timestamp across the multiple video streams can be fused in real time, with the fused canvas updated in real time so that the canvas always holds the latest video content. Alternatively, frame images with the same timestamp can be collected periodically for fusion, updating the canvas content periodically; this reduces server load and improves fusion throughput.
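One way to read the timestamp-synchronization step above: a canvas can only be fused at a timestamp for which every stream has delivered a frame. A minimal sketch of that rule, assuming integer frame timestamps (the function name is illustrative):

```python
# A frame can be fused into the canvas only when all streams have a frame
# at that timestamp, so the fusion point is the newest timestamp that is
# common to (i.e. not ahead of) every stream.

def fusion_timestamp(latest_per_stream: dict[str, int]) -> int:
    """Newest timestamp for which every stream has delivered a frame."""
    if not latest_per_stream:
        raise ValueError("no streams registered")
    return min(latest_per_stream.values())
```

For example, if streams 1-1, 2-1, and 3-1 have delivered frames up to timestamps 105, 103, and 104 respectively, the canvas can be fused at timestamp 103.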
Those skilled in the art will appreciate that decoding and fusion rendering of the multiple video streams may be performed cooperatively by several servers. For example, the 4K (3840×2160) streams may be decoded and rendered by one or more servers, while the 2K (2560×1440) and 1080P (1920×1080) streams are decoded and rendered by one or more other servers. The system can thus be deployed flexibly according to the performance of the available servers.
Step S13, after receiving a live broadcast request from a client, providing information about the M canvases with different resolutions to the client, so that the client determines, according to its selection area and display resolution, the canvas of target resolution among the M canvases, and pulls the video streams corresponding to the canvas of target resolution and the selection area.
After receiving the client's live broadcast request, the server provides information about the fused multi-layer canvases of different resolutions to the client. This canvas information may include the pixel size of each canvas, how many camera videos the canvas is stitched from, the position of each camera video within the canvas, and the resolution of each camera video. It may also include the image content of the whole area: for example, after fusing the frame images with the newest timestamp from all streams, a thumbnail can be generated and provided to the client so that the client can select a region of interest.
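A possible shape for the per-layer canvas information is sketched below. The field names are assumptions; only the listed contents (canvas pixel size, camera count, each camera video's position and resolution) come from the description, and a regular cols × rows grid is assumed:

```python
# Hypothetical per-layer canvas metadata returned to the client.
# Cameras are numbered row-major starting at 1, as in the 12-camera example.

def canvas_info(layer: int, cols: int, rows: int, w: int, h: int) -> dict:
    """Metadata for one canvas layer stitched from a cols x rows camera grid,
    where each camera video on this layer is w x h pixels."""
    return {
        "layer": layer,
        "canvas_size": (cols * w, rows * h),   # full canvas in pixels
        "num_cameras": cols * rows,
        "cameras": {                            # position of each camera video
            r * cols + c + 1: {"x": c * w, "y": r * h, "resolution": (w, h)}
            for r in range(rows) for c in range(cols)
        },
    }
```

For the 2K layer of the 12-camera example (4 columns × 3 rows of 2560×1440 tiles), the canvas is 10240×4320 pixels.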
The client may then select the appropriate resolution canvas based on its selection area and the display resolution of the client's display device, or a display resolution specified by the client.
For example, the client obtains from the server the 3-layer canvas as shown in fig. 2, selects area A1, and computes the resolution that area A1 corresponds to in each of the 3 canvas layers. If the resolution of area A1 in the lowest-resolution third-layer canvas is greater than or equal to the client's display resolution, the third-layer canvas is determined as the canvas of target resolution. If it is smaller, the resolution of area A1 in the second-layer canvas is compared next: if it is greater than or equal to the display resolution, the second-layer canvas is the canvas of target resolution; otherwise the first-layer canvas is determined as the canvas of target resolution, even if the resolution of area A1 in the first-layer canvas is still smaller than the display resolution. Equivalently, the resolution of area A1 in each layer can be examined layer by layer starting from the first-layer canvas; the canvas in which A1's resolution is greater than the client display resolution and closest to it is the canvas of target resolution.
Similarly, when M is greater than 3, the procedure still starts from the lowest- (or highest-) resolution canvas and checks whether the resolution of the selection area in each canvas is greater than or equal to the client's display resolution and closest to it; if so, that canvas is taken as the canvas of target resolution.
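The layer-selection rule described above can be sketched as follows. This is a minimal illustration under one assumption: the client has already computed, for each layer, the pixel size that the selection area maps to on that layer's canvas:

```python
# Target-canvas selection: scan from the lowest-resolution layer upward and
# pick the first canvas in which the selected region covers the client's
# display resolution; if even the highest-resolution canvas is too small,
# fall back to it anyway (as in the A1 example above).

def pick_target_layer(region_sizes: list[tuple[int, int]],
                      display: tuple[int, int]) -> int:
    """region_sizes[i] = (w, h) of the selection area on layer i,
    ordered from highest resolution (index 0) to lowest."""
    dw, dh = display
    for i in range(len(region_sizes) - 1, -1, -1):  # lowest resolution first
        w, h = region_sizes[i]
        if w >= dw and h >= dh:
            return i
    return 0  # highest-resolution canvas, even though it is still too small
```

For a 1080P display this prefers the cheapest layer that still fills the screen, which is exactly why pulling fewer, smaller streams suffices for most viewers.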
Once the canvas of target resolution is determined, the client can pull the video streams of the corresponding resolution for the canvas of target resolution and the selection area, and perform decoding, fusion, rendering, and related operations on the client side. For example, in the embodiment shown in fig. 2, if the canvas of target resolution is the 2K second-layer canvas, the client pulls the 2K streams 1-2, 2-2, 3-2, 5-2, 6-2, 7-2, 9-2, 10-2, and 11-2 of camera videos 1, 2, 3, 5, 6, 7, 9, 10, and 11 corresponding to selection area A1, then decodes, fuses, and renders the 9 pulled streams to the display device.
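The mapping from a selection rectangle to the camera videos it covers can be sketched with the 12-camera example. A 4-column × 3-row grid with row-major numbering is an assumption consistent with fig. 2 and the stream list above:

```python
# Which camera tiles does a selection rectangle intersect?
# Cameras are numbered row-major starting at 1; each tile is cam_w x cam_h
# pixels on the chosen canvas layer.

def cameras_in_region(x0: int, y0: int, x1: int, y1: int,
                      cam_w: int, cam_h: int,
                      cols: int = 4, rows: int = 3) -> list[int]:
    """Camera numbers whose tiles intersect [x0, x1) x [y0, y1)."""
    c0, c1 = max(0, x0 // cam_w), min(cols - 1, (x1 - 1) // cam_w)
    r0, r1 = max(0, y0 // cam_h), min(rows - 1, (y1 - 1) // cam_h)
    return sorted(r * cols + c + 1
                  for r in range(r0, r1 + 1)
                  for c in range(c0, c1 + 1))
```

On the 2K layer (2560×1440 tiles), a selection spanning the first three columns and all three rows yields cameras 1, 2, 3, 5, 6, 7, 9, 10, 11, matching the A1 example; the client then pulls the corresponding "-2" streams.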
In this embodiment, the server provides fixed resources, that is, the computing power the server needs is fixed, and the computation is offloaded to the client. When the number of clients increases, the resource occupation of the server does not increase, so a large number of clients can be supported without expanding server capacity.
If the number of cameras in the array camera is large, the video streams of different resolutions need to be cooperatively decoded by, and stored in, a plurality of servers. When pulling video streams of the target resolution, the client then has to pull from several servers, so it must know in advance which server stores each resolution of each camera video.
In an exemplary embodiment, the live broadcasting method of billion pixel video further comprises: segmenting the M layers of canvases with different resolutions using the same segmentation rule, segmenting each canvas into N blocks with N being greater than or equal to 1, and numbering each block, each block corresponding to multiple camera videos;
and storing the video streams of different resolutions of the multiple camera videos corresponding to each block in one or more designated servers, and establishing the correspondence among block numbers, video streams of different resolutions, and storage servers.
The canvas is divided into a plurality of blocks according to the videos it contains, each block corresponding to multiple camera videos, and each block is given a number. The video streams of different resolutions of the camera videos corresponding to each block can be stored in one or more designated servers.
The multi-layer canvases are segmented using the same segmentation rule, and the correspondence among block numbers, video streams of different resolutions, and storage servers is established. This makes it clear which camera videos each block number covers and which server stores each resolution of each camera video. When a client initiates a live broadcast request, or changes the selection area through operations such as panning and zooming, the target server can be determined quickly and the corresponding video streams pulled, improving response speed.
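A minimal sketch of such a segmentation rule, assuming the block layout of fig. 3, the stream-numbering scheme used later in the text (camera i at resolution index j gives stream "i-j"), and hypothetical server names:

```python
def build_segmentation_rule(blocks, resolutions, server_of_block):
    """Map (block number, resolution label) -> (stream ids, storage server).

    `blocks` maps block number -> list of camera numbers; streams are
    labelled "<camera>-<resolution index>" following the patent's scheme,
    where index 1 is the original resolution.
    """
    rule = {}
    for block, cameras in blocks.items():
        for idx, res in enumerate(resolutions, start=1):
            streams = [f"{cam}-{idx}" for cam in cameras]
            rule[(block, res)] = (streams, server_of_block[block])
    return rule

# Block layout from fig. 3; server names are assumptions for the sketch.
blocks = {"B1": [1, 5, 9], "B2": [2, 6, 10], "B3": [3, 7, 11], "B4": [4, 8, 12]}
rule = build_segmentation_rule(
    blocks, ["4K", "2K", "1080P"],
    {"B1": "server-1", "B2": "server-2", "B3": "server-3", "B4": "server-4"})
```

With this table, `rule[("B2", "2K")]` yields the 2K streams of cameras 2, 6 and 10 together with the server that stores them, which is exactly the lookup the client performs when the selection area changes.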
In an exemplary embodiment, the live broadcasting method of the billion pixel video, after receiving the playing request of the client, further includes:
providing the segmentation rules to the client.
After the client initiates the live broadcast request, the server provides the client with the relevant information of the multi-layer canvases together with the segmentation rule. After determining the canvas of the target resolution, the client queries the segmentation rule for the blocks covered by the selection area, determines the video streams and storage locations corresponding to those block numbers, connects to the servers at those storage locations, and pulls the video streams.
FIG. 3 is a diagram illustrating segmentation of a canvas according to an exemplary embodiment. Referring to fig. 3, still taking the 12-way camera video as an example, each camera video includes a video stream of 4K (3840×2160) resolution, a video stream of 2K (2560×1440) resolution, and a video stream of 1080P (1920×1080) resolution. In fig. 3 the canvas is segmented so that camera videos 1, 5 and 9 form block B1, camera videos 2, 6 and 10 form block B2, camera videos 3, 7 and 11 form block B3, and camera videos 4, 8 and 12 form block B4. The blocks may be of the same or different sizes and may be divided flexibly according to actual requirements, as long as the same segmentation rule is applied to every canvas layer.
The whole query and calculation process is completed by the client without participation of the server, so the computing power required of the server is fixed and its resource occupation does not grow as the number of clients increases.
Fig. 4 is a flow chart illustrating a method of live video of billions of pixels, according to an example embodiment. Referring to fig. 4, the live broadcasting method of billion pixel video is applied to a client, and at least includes steps S41 to S45, and is described in detail as follows:
Step S41: a live broadcast request is sent to the server to acquire the relevant information of the M layers of canvases with different resolutions. The client sends a live broadcast request for the target video to the server and obtains from the server the relevant information of the M layers of canvases with different resolutions of the target video. The M layers of canvases with different resolutions are obtained by the server fusing the video streams of different resolutions of the multiple camera videos.
In step S42, the client determines, from the canvas of M layers of different resolutions, a canvas of a target resolution corresponding to the selection area and the display resolution, according to the selection area and the display resolution.
Based on the selection area and the display resolution, the client determines which canvas layer among the M layers to use. The selection area is selected by the client within the canvas, and the display resolution of the client may be determined by the resolution of the client's display device or may be specified by the client.
For example, the client obtains the 3-layer canvas shown in fig. 2 from the server, selects area A1, and calculates the resolution of area A1 in each of the 3 canvas layers. If the resolution of area A1 in the third-layer canvas, which has the lowest resolution, is greater than or equal to the display resolution of the client, the third-layer canvas is determined to be the canvas of the target resolution; otherwise the resolution of area A1 in the second-layer canvas is compared next, and if it is greater than or equal to the display resolution of the client, the second-layer canvas is determined to be the canvas of the target resolution; if the resolution of area A1 in the second-layer canvas is also smaller than the display resolution of the client, the first-layer canvas is determined to be the canvas of the target resolution, even if the resolution of area A1 in it is still smaller than the display resolution of the client. When M is greater than 3, the comparison still starts from the canvas of the lowest resolution, judging layer by layer whether the resolution of the selection area in each canvas is greater than or equal to the display resolution of the client; if so, the current layer is taken as the canvas of the target resolution. The determination of the canvas of the target resolution has been described above and is not repeated here.
Step S43, determining a plurality of paths of camera videos corresponding to the selected area and a target video stream corresponding to canvas of target resolution in the plurality of paths of camera videos.
The selection area can be a partial area within a single camera video or an area spanning multiple camera videos, and the corresponding camera videos can be determined from the selection area. According to the canvas of the target resolution, the video streams of the target resolution in those camera videos can then be determined as the target video streams.
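Assuming the cameras are laid out row-major on a 4×3 grid, as in the 12-camera example, the camera videos covered by a selection area could be found as follows; the grid layout and fractional coordinates are assumptions of the sketch:

```python
def cameras_in_region(region, grid_cols=4, grid_rows=3):
    """Return the camera numbers whose tiles intersect the selection.

    Cameras are numbered row-major 1..grid_cols*grid_rows on the canvas,
    matching the 12-camera example; `region` is (x, y, w, h) given as
    fractions of the full canvas.
    """
    x, y, w, h = region
    c0 = int(x * grid_cols)
    c1 = min(grid_cols - 1, int((x + w) * grid_cols - 1e-9))
    r0 = int(y * grid_rows)
    r1 = min(grid_rows - 1, int((y + h) * grid_rows - 1e-9))
    return [r * grid_cols + c + 1
            for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

A region spanning the middle two columns and top two rows, like area A2 in the worked example, intersects cameras 2, 3, 6 and 7.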
Step S44, pulling the target video stream.
After the target video streams are determined, the client can pull them to local storage.
And step S45, splicing and fusing the target video stream according to the position in the selection area, and rendering the target video stream to the display device after cutting.
Finally, the client decodes the pulled video streams, splices and fuses them according to the position of each target-resolution stream within the selection area, crops the result to obtain the target-resolution video of the selection area, and renders it to the display device.
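A simplified sketch of the splice-fuse-crop step, assuming the pulled streams are already decoded into arrays and all tiles share one size; real fusion of array-camera imagery would also involve registration and blending, which this sketch omits:

```python
import numpy as np

def fuse_and_crop(frames, positions, tile_h, tile_w, crop):
    """Splice decoded tiles at their canvas positions, then crop the selection.

    `frames` maps camera id -> HxWx3 uint8 frame at the target resolution;
    `positions` maps camera id -> (row, col) on the canvas grid; `crop` is
    (top, left, height, width) in pixels of the fused mosaic.
    """
    rows = max(r for r, _ in positions.values()) + 1
    cols = max(c for _, c in positions.values()) + 1
    canvas = np.zeros((rows * tile_h, cols * tile_w, 3), dtype=np.uint8)
    for cam, frame in frames.items():
        r, c = positions[cam]
        # Place each decoded tile at its position in the selection mosaic.
        canvas[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = frame
    top, left, h, w = crop
    # Cut out exactly the selection area for rendering.
    return canvas[top:top + h, left:left + w]
```

In practice each pulled stream would be decoded frame by frame and this fusion run per frame before handing the cropped image to the renderer.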
The server only provides video streams of fixed resolutions and a fixed number of channels for the client to pull. After pulling the multiple video streams corresponding to the selection area and resolution, the client performs decoding, splicing, rendering and other operations itself, so the computation is offloaded to the client, the resource occupation of the server does not increase as the number of end users grows, and a large number of clients can be supported without expanding the server.
In an exemplary embodiment, the live method of billion pixel video further includes, after the client sends the live request to the server:
and obtaining a segmentation rule from the server, wherein the segmentation rule comprises block numbers and the corresponding relation between video streams with different resolutions and the storage server.
To accelerate acquisition of the target-resolution video streams corresponding to the selection area, after the server segments the canvases, a client that sends a live broadcast request obtains not only the relevant information of the multi-layer canvases but also the segmentation rule, which includes the correspondence among block numbers, video streams of different resolutions, and storage servers. The client can quickly determine the target video streams from the blocks covered by the selection area, determine from the correspondence the addresses of the servers storing them, and request the target video streams from those servers; the query work is completed at the client, further reducing the resource consumption of the server.
In an exemplary embodiment, in step S43, determining a plurality of camera videos corresponding to the selected area, and a target video stream corresponding to a canvas of a target resolution among the plurality of camera videos includes:
Determining the number of the block corresponding to the selected area;
determining a target video stream in the block according to the block and the target resolution;
inquiring a segmentation rule, and determining a target server where a target video stream is located;
the target video stream is pulled from the target server.
For example, taking fig. 3, the client can determine from selection area A2 that the corresponding blocks are B2 and B3; if the resolution specified by the client is 2K, the target resolution is 2K. The 2K video streams of camera videos 2, 3, 6 and 7, namely streams 2-2, 3-2, 6-2 and 7-2, can therefore be determined as the target video streams. By querying the segmentation rule, that is, the correspondence among block numbers, video streams of different resolutions, and storage servers, the target server storing each target video stream can be determined quickly, and the target video streams can be pulled by connecting to those servers, speeding up display at the client.
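The block lookup in this example can be sketched as follows; the server names follow the worked example (block B2 on server 2, block B3 on server 3) and are otherwise hypothetical:

```python
def resolve_pull_plan(selected_cameras, resolution_index, camera_block,
                      block_server):
    """Group the target streams of the selected cameras by storage server.

    `resolution_index` follows the patent's numbering (1 = original 4K,
    2 = 2K, 3 = 1080P); `camera_block` maps camera -> block number and
    `block_server` maps block number -> server.
    """
    plan = {}
    for cam in sorted(selected_cameras):
        server = block_server[camera_block[cam]]
        plan.setdefault(server, []).append(f"{cam}-{resolution_index}")
    return plan

# Fig. 3 block layout inverted to a camera -> block map.
camera_block = {c: b for b, cams in
                {"B1": [1, 5, 9], "B2": [2, 6, 10],
                 "B3": [3, 7, 11], "B4": [4, 8, 12]}.items() for c in cams}
plan = resolve_pull_plan([2, 3, 6, 7], 2, camera_block,
                         {"B2": "server-2", "B3": "server-3"})
```

For selection area A2 at 2K, the plan groups streams 2-2 and 6-2 under server 2 and streams 3-2 and 7-2 under server 3, matching the worked example.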
For a better understanding of the live broadcasting method of billion pixel video provided herein, an example is illustrated.
Referring to fig. 2 and 3, a server cluster acquires the videos of a plurality of cameras from an array camera. The array camera is composed of 12 cameras shooting 12 camera videos in total, and each camera video is numbered 1 to 12 according to the position of its camera. Besides the video stream of the original 4K (3840×2160) resolution, each camera also encodes its video into a 2K (2560×1440) video stream and a 1080P (1920×1080) video stream, so each camera video comprises 3 video streams of different resolutions. The three streams are numbered as well: in the first camera video, the 4K (3840×2160) stream is identified as 1-1, the 2K (2560×1440) stream as 1-2, and the 1080P (1920×1080) stream as 1-3. Similarly, in the second camera video the three streams are identified as 2-1, 2-2 and 2-3 respectively, and so on up to camera video 12, which is not discussed further here.
After receiving the 36 paths of video streams, the server splices and merges 12 video streams with the original resolution of 4K (3840 multiplied by 2160) according to the corresponding camera positions to form a first canvas; merging 12 paths of video streams with resolution of 2K (2560 multiplied by 1440) into a second layer canvas; 12-way video streams with resolution of 1080P (1920 x 1080) are converged into a third-layer canvas.
In each canvas layer, the 12 camera videos are divided into blocks: camera videos 1, 5 and 9 form block B1, camera videos 2, 6 and 10 form block B2, camera videos 3, 7 and 11 form block B3, and camera videos 4, 8 and 12 form block B4. The 36 video streams are stored to designated servers, as shown in table 1.
Table 1:
Table 1 records the correspondence among block numbers, video streams of different resolutions, and storage servers. After receiving the live broadcast request of the client, the server provides the client with the relevant information of the 3-layer canvas and the segmentation rule.
Suppose the selection area of the client is area A2 and the display resolution is 2K. After acquiring the three canvas layers from the server, the client starts from the third-layer canvas with the lowest resolution and judges, layer by layer, whether the resolution of the selection area in each canvas is greater than or equal to the display resolution of the client display device; assume the second-layer canvas meets the condition. According to the segmentation rule, the client determines that it should pull the video streams identified as 2-2 and 6-2 from server 2, corresponding to block B2, and the video streams identified as 3-2 and 7-2 from server 3, corresponding to block B3. The 4 video streams are decoded, spliced and fused, and rendered for display on the display device.
Through the above embodiment, the server side obtains the video streams of different resolutions of the camera videos of the array camera, establishes multiple canvas layers of different resolutions and the segmentation rule, and stores the video streams of different resolutions in one or more designated servers. When the client makes a live broadcast request, it acquires the relevant information of the multi-layer canvases and the segmentation rule from the server side, determines the canvas of the target resolution based on its selection area and display resolution, and then determines the target video streams corresponding to that canvas. It determines the corresponding block numbers from the segmentation rule, quickly looks up the storage location of each target video stream by block number, pulls the target video streams, and fuses and renders them to the client display device. The service capacity of each server can thus be integrated to provide video content to clients at fixed computing cost; the client selects target video streams of the appropriate resolution according to its own capability, pulls, splices, fuses and renders them to the display device, and the calculation work is completed by the client, so a large number of clients can be supported while the computing power of the server remains unchanged.
Fig. 5 is a block diagram of a live broadcasting device of billion pixel video, shown according to an example embodiment. Referring to fig. 5, the live broadcasting device of billion pixel video is applied to a server and includes: an array video acquisition module 501, a canvas convergence module 502, and a response module 503.
The array video acquisition module 501 is configured to acquire all camera videos captured by an array camera, wherein each path of camera video includes M video streams of different resolutions.
The canvas convergence module 502 is configured to converge video streams of the same resolution in all camera videos into M layers of canvases of different resolutions.
The response module 503 is configured to provide relevant information of the M layers of canvases with different resolutions to the client after receiving the live broadcast request of the client, so that the client determines a canvas with a target resolution corresponding to the selection area and the display resolution from the M layers of canvases with different resolutions according to the selection area and the display resolution, and pulls the video stream corresponding to the canvas with the target resolution and the selection area.
In an exemplary embodiment, the canvas fusing module 502 is further configured to segment the M layers of canvases of different resolutions using the same segmentation rule, segment each canvas into N blocks, and number each block, each block corresponding to a multi-way camera video;
And storing video streams with different resolutions of the multi-path camera video corresponding to each block in one or more designated servers, and establishing corresponding relations among the block numbers, the video streams with different resolutions and the storage servers.
In an exemplary embodiment, the response module 503 is further configured to provide the segmentation rule to the client after receiving the play request of the client.
Fig. 6 is a block diagram of a live device of billion pixel video, shown according to an example embodiment. Referring to fig. 6, a live device of billion pixel video is applied to a client, comprising: a request module 601, a canvas selection module 602, a target video stream determination module 603, a pull module 604, and a rendering module 605.
The request module 601 is configured to send a live broadcast request to a server to obtain canvas related information of M layers with different resolutions.
The canvas selection module 602 is configured for a client to determine a canvas of a target resolution corresponding to a selection region and a display resolution from among the canvas of M layers of different resolutions according to the selection region and the display resolution.
The target video stream determination module 603 is configured to determine a multi-path camera video corresponding to the selected region and a target video stream corresponding to a canvas of a target resolution in the multi-path camera video.
The pull module 604 is configured to pull the target video stream.
The rendering module 605 is configured to splice and fuse the target video stream according to the position in the selection area, and render the target video stream to the display device after clipping.
In an exemplary embodiment, the request module 601 is further configured to obtain a segmentation rule from the server after sending the live broadcast request to the server, where the segmentation rule includes a block number, and a correspondence between video streams with different resolutions and a storage server.
In an exemplary embodiment, the target video stream determination module 603 is further configured to:
determining the number of the block corresponding to the selected area;
determining a target video stream in a block according to the block and the target resolution;
querying the segmentation rule to determine the target server where the target video stream is located;
the target video stream is pulled from a target server.
Fig. 7 is a block diagram of a computer device 700 for live broadcasting of billion pixel video, according to an example embodiment. For example, the computer device 700 may be provided as a server. Referring to fig. 7, the computer device 700 includes one or more processors 701, the number of which may be set as needed, and a memory 702 for storing instructions, such as application programs, executable by the processor 701. One or more memories may likewise be provided as needed, each storing one or more applications. The processor 701 is configured to execute the instructions to perform the live broadcasting method of billion pixel video described above.
It will be apparent to one of ordinary skill in the art that embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The description herein is with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising …" does not exclude the presence of additional identical elements in an article or apparatus that comprises the element.
While preferred embodiments herein have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all alterations and modifications as fall within the scope herein.
It will be apparent to those skilled in the art that various modifications and variations can be made herein without departing from the spirit and scope of the disclosure. Thus, given that such modifications and variations herein fall within the scope of the claims herein and their equivalents, such modifications and variations are intended to be included herein.

Claims (10)

1. A live method of billion pixel video, applied to a server, comprising:
acquiring all camera videos shot by an array camera, wherein each path of camera video comprises M video streams with different resolutions, and M is more than or equal to 2;
fusing video streams with the same resolution in all camera videos into M layers of canvases with different resolutions;
after receiving a live broadcast request of a client, providing relevant information of the M layers of canvases with different resolutions to the client, so that the client determines a canvas with target resolution corresponding to a selection area and a display resolution in the M layers of canvases with different resolutions according to the selection area and the display resolution, and pulls video streams corresponding to the canvas with target resolution and the selection area; and splicing and fusing the pulled video stream according to the position in the selection area, and rendering the video stream to a display device after cutting.
2. A method of live video of billions of pixels as in claim 1 further comprising:
dividing the canvas of the M layers with different resolutions by using the same dividing rule, dividing each canvas into N blocks, wherein N is more than or equal to 1, numbering each block, and each block corresponds to multiple paths of camera videos;
and storing video streams with different resolutions of the multi-path camera video corresponding to each block in one or more designated servers, and establishing the corresponding relation between the block numbers and the video streams with different resolutions and the storage servers.
3. The method for live video billion pixels of claim 2, wherein after receiving the play request from the client, further comprising:
and providing the segmentation rule to the client.
4. A live method of billion pixel video, for application to a client, comprising:
a live broadcast request is sent to a server to acquire relevant information of M layers of canvases with different resolutions;
the client determines canvas with target resolution corresponding to the selection area and the display resolution in the canvas with M layers of different resolutions according to the selection area and the display resolution;
Determining a plurality of paths of camera videos corresponding to the selected area and a target video stream corresponding to canvas of the target resolution in the plurality of paths of camera videos;
pulling the target video stream;
and splicing and fusing the target video stream according to the position in the selected area, and rendering the target video stream to a display device after cutting.
5. The method for live broadcasting of a billion pixel video of claim 4, wherein the sending the live broadcasting request to the server further comprises:
and obtaining a segmentation rule from the server, wherein the segmentation rule comprises block numbers and the corresponding relation between video streams with different resolutions and a storage server.
6. The method of claim 5, wherein the determining a plurality of camera videos corresponding to the selected region and a target video stream of the plurality of camera videos corresponding to the canvas of the target resolution comprises:
determining the number of the block corresponding to the selected area;
determining a target video stream in the block according to the block and the target resolution;
inquiring the segmentation rule, and determining a target server where the target video stream is located;
And pulling the target video stream from the target server.
7. A live video billion pixel device for use on a server, comprising:
the array video acquisition module is used for acquiring all camera videos shot by the array camera, wherein each path of camera video comprises M video streams with different resolutions;
the canvas fusion module is used for fusing video streams with the same resolution in all camera videos into M layers of canvases with different resolutions;
the response module is used for providing the relevant information of the M layers of canvases with different resolutions to the client after receiving the live broadcast request of the client so that the client can determine the canvases with target resolutions corresponding to the selection area and the display resolution in the M layers of canvases with different resolutions according to the selection area and the display resolution, and pull the video streams corresponding to the canvases with target resolutions and the selection area; and splicing and fusing the pulled video stream according to the position in the selection area, and rendering the video stream to a display device after cutting.
8. A live device of billion pixel video, for application to a client, comprising:
The request module is used for sending a live broadcast request to the server and acquiring related information of M layers of canvases with different resolutions;
the canvas selection module is used for determining a canvas with target resolution corresponding to the selection area and the display resolution from the canvases with different resolutions of the M layers according to the selection area and the display resolution by the client;
a target video stream determining module, configured to determine multiple paths of camera videos corresponding to the selection area, and a target video stream corresponding to the canvas of the target resolution in the multiple paths of camera videos;
the pulling module is used for pulling the target video stream;
and the rendering module is used for splicing and fusing the target video stream according to the position in the selected area, and rendering the target video stream to a display device after cutting.
9. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed, implements the steps of the method according to any one of claims 1-6.
10. A computer device comprising a processor, a memory, and a computer program stored on the memory, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-6.
CN202111149485.7A 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video Active CN113891112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111149485.7A CN113891112B (en) 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video


Publications (2)

Publication Number Publication Date
CN113891112A CN113891112A (en) 2022-01-04
CN113891112B true CN113891112B (en) 2023-12-05

Family

ID=79007953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111149485.7A Active CN113891112B (en) 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video

Country Status (1)

Country Link
CN (1) CN113891112B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749857A (en) * 2017-11-01 2018-03-02 深圳市普天宜通技术股份有限公司 Method, storage medium and client a kind of while that check multi-channel video
US10341605B1 (en) * 2016-04-07 2019-07-02 WatchGuard, Inc. Systems and methods for multiple-resolution storage of media streams
CN110460871A (en) * 2019-08-29 2019-11-15 香港乐蜜有限公司 Generation method, device, system and the equipment of live video
CN110662100A (en) * 2018-06-28 2020-01-07 中兴通讯股份有限公司 Information processing method, device and system and computer readable storage medium
CN111193937A (en) * 2020-01-15 2020-05-22 北京拙河科技有限公司 Processing method, device, equipment and medium for live video data
CN111385607A (en) * 2018-12-29 2020-07-07 浙江宇视科技有限公司 Resolution determination method and device, storage medium, client and server
CN111601151A (en) * 2020-04-13 2020-08-28 北京拙河科技有限公司 Method, device, medium and equipment for reviewing hundred million-level pixel video
CN112473130A (en) * 2020-11-26 2021-03-12 成都数字天空科技有限公司 Scene rendering method and device, cluster, storage medium and electronic equipment
WO2021179783A1 (en) * 2020-03-11 2021-09-16 叠境数字科技(上海)有限公司 Free viewpoint-based video live broadcast processing method, device, system, chip and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10313417B2 (en) * 2016-04-18 2019-06-04 Qualcomm Incorporated Methods and systems for auto-zoom based adaptive video streaming
CN112262570B (en) * 2018-06-12 2023-11-14 E·克里奥斯·夏皮拉 Method and computer system for automatically modifying high resolution video data in real time



Similar Documents

Publication Publication Date Title
CN107534789B (en) Image synchronization device and image synchronization method
CN111193937B (en) Live video data processing method, device, equipment and medium
CN101778257B (en) Generation method of video abstract fragments for digital video on demand
US11694303B2 (en) Method and apparatus for providing 360 stitching workflow and parameter
US11539983B2 (en) Virtual reality video transmission method, client device and server
CN113099245B (en) Panoramic video live broadcast method, system and computer readable storage medium
CN109698949B (en) Video processing method, device and system based on virtual reality scene
CN107592549B (en) Panoramic video playing and photographing system based on two-way communication
KR101964126B1 (en) The Apparatus And Method For Transferring High Definition Video
CN111225228B (en) Video live broadcast method, device, equipment and medium
CN112468832A (en) Billion-level pixel panoramic video live broadcast method, device, medium and equipment
CN111542862A (en) Method and apparatus for processing and distributing live virtual reality content
CN111800653B (en) Video decoding method, system, device and computer readable storage medium
CN111601151A (en) Method, device, medium and equipment for reviewing hundred million-level pixel video
CN113891111B (en) Live broadcasting method, device, medium and equipment of billion pixel video
CN114007059A (en) Video compression method, decompression method, device, electronic equipment and storage medium
CN107707830B (en) Panoramic video playing and photographing system based on one-way communication
JP2017123503A (en) Video distribution apparatus, video distribution method and computer program
US20220053127A1 (en) Image Processing Method, Apparatus and System, Network Device, Terminal and Storage Medium
US20200213631A1 (en) Transmission system for multi-channel image, control method therefor, and multi-channel image playback method and apparatus
CN113891112B (en) Live broadcasting method, device, medium and equipment of billion pixel video
JP2015050572A (en) Information processing device, program, and information processing method
CN114513702B (en) Web-based block panoramic video processing method, system and storage medium
WO2023029252A1 (en) Multi-viewpoint video data processing method, device, and storage medium
US11706375B2 (en) Apparatus and system for virtual camera configuration and selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant