CN113286149B - Cloud conference self-adaptive multi-layer video coding method, system and storage medium - Google Patents

Cloud conference self-adaptive multi-layer video coding method, system and storage medium

Info

Publication number
CN113286149B
CN113286149B (application CN202110827124.7A)
Authority
CN
China
Prior art keywords
video
resolution
list
normalized
watching
Prior art date
Legal status
Active
Application number
CN202110827124.7A
Other languages
Chinese (zh)
Other versions
CN113286149A (en)
Inventor
马华文
Current Assignee
G Net Cloud Service Co Ltd
Original Assignee
G Net Cloud Service Co Ltd
Priority date
Filing date
Publication date
Application filed by G Net Cloud Service Co Ltd
Priority to CN202110827124.7A
Publication of CN113286149A
Application granted
Publication of CN113286149B

Classifications

    • H04N 19/30: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N 7/0117: Conversion of standards processed at pixel level, involving conversion of the spatial resolution of the incoming video signal
    • H04N 7/0127: Conversion of standards processed at pixel level, by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • H04N 7/15: Conference systems

Abstract

The invention discloses a cloud conference adaptive multi-layer video coding method, system and storage medium. The method comprises the following steps: presetting multiple layers of video encoders, each of which encodes video data at a different video resolution; receiving, in real time, a video resolution list concerning the video watching ends; normalizing the video resolution list according to a normalization rule to obtain a normalized resolution list; for each normalized resolution in the normalized resolution list, selecting and starting the corresponding video encoder while closing the other video encoders; and encoding the shared video with the started video encoders to obtain video data at the different video resolutions, which is sent to the corresponding video watching ends for display. The invention effectively solves problems such as wasted CPU resources and redundant uplink bandwidth, thereby improving the video coding performance and stability of the end devices in the cloud conference.

Description

Cloud conference self-adaptive multi-layer video coding method, system and storage medium
Technical Field
The invention relates to the technical field of video cloud conferencing, and in particular to a cloud conference adaptive multi-layer video coding method, system and storage medium.
Background
With ongoing social development, demand for video cloud conferencing keeps growing and its user base keeps widening. Both traditional video conferencing and today's popular cloud video place ever higher demands on the universality and compatibility of computing devices. To satisfy the participants in a meeting, the video sharing end can meet their needs only by performing multi-layer video coding with two or more layers, and a number of multi-layer video coding schemes have been proposed on this basis.
At present, existing cloud conference multi-layer video coding schemes fall into two general directions. One is standard SVC multi-layer video coding, which produces multiple layers of video at different resolutions in a single encoding pass. The other is to run multiple encoders at the same time, each encoding video at one resolution, so that together they produce video at several different resolutions. Both schemes can satisfy watching ends that need videos at different resolutions in a cloud conference.
The advantage of the former scheme is that a single encoding pass yields video data at several different resolutions, with minimal performance loss on the client device. However, the multi-layer code stream has spatial-domain coupling and must use SVC coding, so a dedicated decoder is required for decoding, scalability is poor, and GPU encoding on mobile terminals is not supported. Consequently, when many kinds of end devices join a conference, an MCU (video format conversion process) is needed, which consumes more cloud resources. The latter scheme runs multiple encoders together for encoding; because each layer's code stream is independent and uncoupled, it is easy to extend, supports GPU encoding on mobile terminals, and all terminal devices can interoperate on video seamlessly in a conference without MCU processing, but this scheme imposes a large performance loss on the client device.
Disclosure of Invention
In view of the foregoing problems, an object of the present invention is to provide a cloud conference adaptive multi-layer video coding method, system and storage medium that perform adaptive multi-layer video coding according to how each participant's required video resolution changes, achieving scalable multi-layer video coding while effectively reducing the performance loss on the video sharing device.
The invention provides a cloud conference self-adaptive multilayer video coding method in a first aspect, which comprises the following steps:
presetting a plurality of layers of video encoders, wherein each video encoder can respectively encode video data with different video resolutions;
receiving a video resolution list about a video watching end in real time;
carrying out normalization processing on the video resolution list according to a normalization rule to obtain a normalized resolution list;
respectively selecting a corresponding video encoder to start for each normalized resolution in the normalized resolution list, and closing other video encoders;
and respectively carrying out coding processing on the shared video based on the started video coder to obtain video data with different video resolutions, and sending the video data to the corresponding video watching end for displaying.
In this scheme, receiving a video resolution list about a video viewing end in real time specifically includes:
in the same cloud conference, each video watching terminal respectively sends a video resolution request message VideoResReq to a video server based on the video resolution requirement of each video watching terminal;
after receiving the video resolution request message VideoResReq from each video watching end, the video server replies with a VideoRes message to each video watching end; at the same time, the video server aggregates the video resolution information of all video watching ends, arranges it into a video resolution list UpdateVideoList, and synchronizes the video resolution list to the video sharing end.
In this scheme, each video watching end sends a video resolution request message VideoResReq to the video server based on its own video resolution requirement, which specifically includes:
when a video watching end A joins the conference, it sends a video resolution request message VideoResReq to the video server according to the video resolution required by its default display layout; and/or
When a video watching end B that has joined the conference changes the display layout of the video it is currently watching in the cloud conference, and the video resolution required by the new layout differs from that required by the old layout, it sends a video resolution request message VideoResReq to the video server based on the video resolution required by the new layout; and/or
When a video watching end C that has joined the conference has more videos in the cloud conference than one page of its layout can display, the videos are displayed in pages; a page turn by the viewer changes which video resolutions are displayed, and a video resolution request message VideoResReq is sent to the video server based on the videos displayed after the page turn.
In this scheme, receiving a video resolution list about a video viewing end in real time specifically includes:
when the video sharing end does not receive a video resolution list UpdateVideoList actively synchronized by the video server over a preset time period, actively sending a video resolution list request message to the video server;
returning, by the video server, a latest video resolution list UpdateVideoList to the video sharing end based on the video resolution list request message;
comparing the latest video resolution list UpdateVideoList with the last received video resolution list UpdateVideoList by the video sharing end, and if the latest video resolution list UpdateVideoList is consistent with the last received video resolution list UpdateVideoList, not changing the state of the multilayer video encoder; and if the video resolution lists are inconsistent, normalizing the UpdateVideoList according to a normalization rule to obtain a latest normalized resolution list, and updating the state of the multi-layer video encoder based on the latest normalized resolution list.
In this scheme, the normalizing the video resolution list according to the normalization rule specifically includes:
presetting a corresponding relation table between actual video resolutions and normalized resolutions, wherein a plurality of normalized resolutions in the corresponding relation table are respectively in one-to-one correspondence with a multilayer video encoder, and the corresponding relation table supports that a single actual video resolution corresponds to a single normalized resolution and a plurality of different actual video resolutions correspond to a single normalized resolution;
converting each actual video resolution in the video resolution list into corresponding normalized resolution according to the corresponding relation table;
traversing the video resolution list, and counting the conversion use times of each normalized resolution in the corresponding relation table;
when the conversion use times of a certain normalized resolution in the corresponding relation table is equal to 0, setting a video encoder corresponding to the normalized resolution to be in a closed state; and when the conversion use times of a certain normalized resolution in the corresponding relation table is more than 0, setting the video encoder corresponding to the normalized resolution to be in an open state, and obtaining the on-off state of the multilayer video encoder in this way.
In this scheme, before the shared video is respectively encoded by the video encoders that are turned on, the method further includes:
judging whether the aspect ratio of the shared video resolution accords with a preset ratio or not;
if not, clipping the shared original video, resampling to the size of the video resolution which can be coded by the opened maximum layer video coder, and delivering the clipped video to each opened video coder for coding with different video resolutions; and if so, directly delivering the shared original video to each started video encoder to perform encoding processing with different video resolutions.
In this scheme, cropping the shared original video specifically includes:
acquiring the width and height of the shared original video's YUV resolution and the width and height of the reference video resolution, where the reference video resolution is the video resolution encoded by the largest opened encoding layer;
calculating the aspect ratio srcRatio of the original video's YUV resolution and the aspect ratio dstRatio of the reference video's YUV resolution;
judging whether the difference srcRatio - dstRatio is greater than or equal to -0.000001 and less than or equal to 0.000001; if so, the width and height of the resampling target video resolution are equal to the width and height of the reference video resolution, respectively;
if not, then
judging whether srcRatio is larger than dstRatio; if so, cropping the shared original video YUV on the left and right, where the width of the resampling target video resolution = the reference video resolution height × 16/9, and the height of the resampling target video resolution equals the reference video resolution height; if not, then
cropping the shared original video YUV on the top and bottom, where the height of the resampling target video resolution = the reference video resolution width × 9/16, and the width of the resampling target video resolution equals the reference video resolution width;
judging whether resampling has been initialized; if not, setting the resampling parameters and initializing the resampling handle;
and cropping, scaling and resampling the original video YUV to obtain the to-be-encoded video data required for multi-layer coding.
The second aspect of the present invention further provides a cloud conference adaptive multi-layer video coding system, which includes a memory and a processor, where the memory includes a cloud conference adaptive multi-layer video coding method program, and when executed by the processor, the cloud conference adaptive multi-layer video coding method program implements the following steps:
presetting a plurality of layers of video encoders, wherein each video encoder can respectively encode video data with different video resolutions;
receiving a video resolution list about a video watching end in real time;
carrying out normalization processing on the video resolution list according to a normalization rule to obtain a normalized resolution list;
respectively selecting a corresponding video encoder to start for each normalized resolution in the normalized resolution list, and closing other video encoders;
and respectively carrying out coding processing on the shared video based on the started video coder to obtain video data with different video resolutions, and sending the video data to the corresponding video watching end for displaying.
In this scheme, receiving a video resolution list about a video viewing end in real time specifically includes:
in the same cloud conference, each video watching terminal respectively sends a video resolution request message VideoResReq to a video server based on the video resolution requirement of each video watching terminal;
after receiving the video resolution request message VideoResReq from each video watching end, the video server replies with a VideoRes message to each video watching end; at the same time, the video server aggregates the video resolution information of all video watching ends, arranges it into a video resolution list UpdateVideoList, and synchronizes the video resolution list to the video sharing end.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a cloud conference adaptive multi-layer video coding method program, and when the cloud conference adaptive multi-layer video coding method program is executed by a processor, the steps of the cloud conference adaptive multi-layer video coding method described above are implemented.
Aiming at the video resolution list required by the video watching ends in a cloud conference, the video sharing end communicates with the video server over a long-lived message connection to obtain the watching ends' latest video resolution list; it then sets the adaptive multi-layer video coding parameters according to that list, performs multi-layer video coding on the video sharing end device according to the watching ends' needs in the cloud conference, and opens or closes individual layers of the multi-layer video encoder. In this way the device's CPU and network bandwidth are used reasonably, CPU resource waste is reduced, the sharing-end device's running performance and uplink bandwidth are preserved, and the stability problems caused by resetting the multi-layer encoder whenever the watching ends' video resolutions change are avoided. This effectively guarantees the reasonable use and stability of the video sharing end's resources in the cloud conference and further improves the user experience.
Drawings
Fig. 1 shows a flow chart of a cloud conference adaptive multi-layer video coding method of the present invention;
FIG. 2 shows a timing diagram of the present invention when the video resolution requirement of the video viewer changes;
FIG. 3 is a flow chart illustrating a process of the present invention for deriving multi-layer video encoder switch states based on normalized resolution;
FIG. 4 is a flow diagram illustrating the process of the present invention for cropping resampling prior to encoding the original video;
FIG. 5 is a flow chart illustrating a process for parallel encoding of multi-layered video using a multi-thread encoder in accordance with the present invention;
fig. 6 shows a block diagram of a cloud conference adaptive multi-layer video coding system according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Fig. 1 shows a flowchart of a cloud conference adaptive multi-layer video coding method according to the present invention.
As shown in fig. 1, a first aspect of the present invention provides a cloud conference adaptive multi-layer video coding method, including the following steps:
s102, presetting a plurality of layers of video encoders, wherein each video encoder can encode video data with different video resolutions respectively;
s104, receiving a video resolution list related to a video watching end in real time;
s106, carrying out normalization processing on the video resolution list according to a normalization rule to obtain a normalized resolution list;
s108, respectively selecting a corresponding video encoder to start for each normalized resolution in the normalized resolution list, and closing other video encoders;
and S110, respectively carrying out coding processing on the shared video based on the started video coder to obtain video data with different video resolutions, and sending the video data to the corresponding video watching end for displaying.
It should be noted that, before the above steps are performed, a video cloud conference system needs to be set up. The system may include a video sharing end, a video server and video watching ends. A user typically sends a cloud conference creation request to the video server through the video sharing end; the video server creates a corresponding cloud conference room based on the request and returns the room's conference number, start time and network link to the user, who then notifies the other participants. When the start time approaches, the user and each participant enter the cloud conference room by typing the conference number into their own client or by clicking the network link directly. The user then shares videos at different resolutions to the video watching ends through the video sharing end, matching the display resolution each watching end requires, so that every participant can watch the best video effect on their own device.
Specifically, when a new participant joins the cloud conference, or a participant changes the display layout or performs similar operations, that participant's video resolution request message is sent to the video server over a reliable bidirectional connection. The participant sharing video in the cloud conference then receives the video resolution list sent by the video server and normalizes the layout resolutions. The video sharing end in the cloud conference performs multi-layer, multi-threaded video encoding: the smallest-layer encoder is started by default, the required layers are encoded, and the outputs are aligned and packed. Finally, the video sharing end opens and closes the layers of the multi-layer video encoder according to the video resolution list that is synchronized and updated in real time.
It should be noted that the video sharing end synchronizes the video watching ends' video resolution list from the video server according to the conference ID and, through an external video interface, sets the multi-layer video coding mask that the video sharing end currently needs. The multi-layer video coding main thread then enables encoding for the required layers and disables encoding for the other layers, ensuring that the resolutions needed by all video watching ends in the current cloud conference are encoded while unneeded resolutions are not. This effectively guarantees the performance and stability of the video sharing end device in the cloud conference.
According to the embodiment of the invention, the receiving of the video resolution list of the video watching end in real time specifically comprises the following steps:
in the same cloud conference, each video watching terminal respectively sends a video resolution request message VideoResReq to a video server based on the video resolution requirement of each video watching terminal;
after receiving the video resolution request message VideoResReq from each video watching end, the video server replies with a VideoRes message to each video watching end; at the same time, the video server aggregates the video resolution information of all video watching ends, arranges it into a video resolution list UpdateVideoList, and synchronizes the video resolution list to the video sharing end.
According to the embodiment of the invention, each video watching end sends a video resolution request message VideoResReq to the video server based on its own video resolution requirement, which specifically includes the following steps:
when a video watching end A joins the conference, it sends a video resolution request message VideoResReq to the video server according to the video resolution required by its default display layout; and/or
When a video watching end B that has joined the conference changes the display layout of the video it is currently watching in the cloud conference, and the video resolution required by the new layout differs from that required by the old layout, it sends a video resolution request message VideoResReq to the video server based on the video resolution required by the new layout; and/or
When a video watching end C that has joined the conference has more videos in the cloud conference than one page of its layout can display, the videos are displayed in pages; a page turn by the viewer changes which video resolutions are displayed, and a video resolution request message VideoResReq is sent to the video server based on the videos displayed after the page turn.
Fig. 2 shows a processing timing diagram of the present invention when the video resolution requirement of the video viewing end changes.
As shown in fig. 2, 2.1 when video watching end A joins the conference, it sends a video resolution request message VideoResReq to the video server F according to the video resolution required by its default display layout.
2.2 After receiving the video resolution request of video watching end A, the video server F replies with a VideoRes message to video watching end A; at the same time, the video server aggregates the video resolution information of all ends watching the video sharing end D and synchronizes all of the watching information to the video sharing end D for processing through a Notify message UpdateVideoList.
2.3 After receiving the UpdateVideoList, the video sharing end D normalizes the video resolutions of all the video watching ends, generates the new on/off states of the multi-layer video encoders, and sends out new multi-layer video data according to the video watching ends' video resolution list.
2.4 When a video watching end B already in the conference changes the layout of the video it is currently watching in the cloud conference, and the video resolution required by the new layout differs from that of the old layout, it needs to send a VideoResReq message to request video data at the new resolution from the video sharing end; the processing follows steps 2.1 to 2.3 above.
2.5 When a video watching end C already in the conference has more videos in the cloud conference than one layout page can display, the videos are displayed in pages. A page turn at video watching end C changes which videos are displayed, which closes the requests for some video resolutions and opens the requests for others, i.e. a VideoResReq is sent to the video server; the processing follows steps 2.1 to 2.3 above.
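The request/aggregation flow in steps 2.1 to 2.5 can be illustrated with a small sketch. This is only an assumed structure (the class name, callback and message transport are not from the patent); it shows a server collecting each watching end's VideoResReq, replying with VideoRes, and pushing the aggregated UpdateVideoList to the sharing end.

```python
# Hypothetical sketch of the server-side aggregation in steps 2.1-2.5; names
# and data structures are assumptions, not the patent's implementation.
from collections import defaultdict

class VideoServerSketch:
    def __init__(self, notify_sharing_end):
        # notify_sharing_end: callable taking the aggregated resolution list (UpdateVideoList)
        self.notify_sharing_end = notify_sharing_end
        self.requests = defaultdict(set)           # user_id -> set of (width, height) requested

    def on_video_res_req(self, user_id, resolutions):
        """Handle a VideoResReq from a watching end and reply with VideoRes."""
        self.requests[user_id] = set(resolutions)  # replace this viewer's current needs
        reply = ("VideoRes", user_id)              # placeholder reply message
        self.push_update_video_list()
        return reply

    def push_update_video_list(self):
        """Aggregate all watching ends' resolutions and notify the sharing end."""
        update_video_list = sorted({r for s in self.requests.values() for r in s})
        self.notify_sharing_end(update_video_list)  # Notify message UpdateVideoList
```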
According to the embodiment of the present invention, receiving a video resolution list about a video viewing end in real time specifically includes:
when the video sharing end does not receive a video resolution list UpdateVideoList actively synchronized by the video server over a preset time period, actively sending a video resolution list request message to the video server;
returning, by the video server, a latest video resolution list UpdateVideoList to the video sharing end based on the video resolution list request message;
comparing the latest video resolution list UpdateVideoList with the last received video resolution list UpdateVideoList by the video sharing end, and if the latest video resolution list UpdateVideoList is consistent with the last received video resolution list UpdateVideoList, not changing the state of the multilayer video encoder; and if the video resolution lists are inconsistent, normalizing the UpdateVideoList according to a normalization rule to obtain a latest normalized resolution list, and updating the state of the multi-layer video encoder based on the latest normalized resolution list.
It should be noted that, in order to ensure real-time synchronization of the video resolution lists, when the video sharing end does not receive the video resolution list pushed by the video server over a preset time period, the video sharing end actively sends a GetVideoList request message to the video server to actively acquire the latest video resolution list, as shown in fig. 2. Preferably, the value of the preset time period ranges from 3min to 10min, but is not limited thereto.
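As a rough illustration of this fallback synchronization, the sketch below polls the server with a GetVideoList-style request once the preset period has elapsed without a push, and only reconfigures the encoders when the returned list differs from the last one. All function parameters are hypothetical hooks, and the 300-second period is simply an assumed value inside the 3 min to 10 min range mentioned above.

```python
# Assumed sketch of the sharing end's fallback synchronization; the callables
# passed in (get_video_list, normalize, apply_encoder_states, ...) stand in for
# the patent's GetVideoList request, normalization rule and encoder switching.
import time

def sync_loop(get_video_list, get_last_push_time, normalize, apply_encoder_states,
              preset_period_s=300, poll_interval_s=1.0):
    last_list = None
    while True:
        if time.time() - get_last_push_time() > preset_period_s:
            latest = get_video_list()                    # active GetVideoList request
            if latest != last_list:                      # list changed since last time
                apply_encoder_states(normalize(latest))  # update multi-layer encoder states
                last_list = latest
            # if the lists match, the multi-layer encoder states are left untouched
        time.sleep(poll_interval_s)
```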
It should be noted that the video viewing terminal a/B/C is in bidirectional communication with the video server F, the video server F is in bidirectional communication with the video sharing terminal D, and the message protocol adopted is shown in table 1 below.
Table 1:
(Table 1 appears as an image in the original publication; its fields are described below.)
Message Header: the message header check field, using the fixed value 0x7DA69JRF.
Message Length: the length of the message body.
Message ID: the message service type, for example the video resolution message VideoRes.
Message Type: the message body type, one of three kinds: Request, Response and Notify, where a Notify message is a notification and may not require a reply.
Message Sequence: the message sequence number.
Conference ID: the conference ID, indicating which conference this service message belongs to; it is globally unique.
User ID: the user ID, i.e. the unique in-conference ID of a participant who has joined, identifying which participant the service message belongs to.
Message Body: the message content, carried as characters.
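For readability, the Table 1 fields can be collected into a single record like the sketch below. Field widths, byte order and the wire encoding are not given in the text, so this is only an assumed in-memory representation, not the actual protocol layout.

```python
# Assumed in-memory view of the Table 1 message fields (not the wire format).
from dataclasses import dataclass

@dataclass
class ConferenceMessage:
    message_header: str    # fixed header check value (see Table 1)
    message_length: int    # length of the message body
    message_id: str        # message service type, e.g. "VideoResReq", "VideoRes", "UpdateVideoList"
    message_type: str      # "Request", "Response" or "Notify" (Notify may not require a reply)
    message_sequence: int  # message sequence number
    conference_id: str     # globally unique conference ID
    user_id: str           # unique in-conference ID of the participant
    message_body: str      # message content, carried as characters
```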
According to the embodiment of the present invention, the normalizing the video resolution list according to the normalization rule specifically includes:
presetting a corresponding relation table between actual video resolutions and normalized resolutions, wherein a plurality of normalized resolutions in the corresponding relation table are respectively in one-to-one correspondence with a multilayer video encoder, and the corresponding relation table supports that a single actual video resolution corresponds to a single normalized resolution and a plurality of different actual video resolutions correspond to a single normalized resolution;
converting each actual video resolution in the video resolution list into corresponding normalized resolution according to the corresponding relation table;
traversing the video resolution list, and counting the conversion use times of each normalized resolution in the corresponding relation table;
when the conversion use times of a certain normalized resolution in the corresponding relation table is equal to 0, setting a video encoder corresponding to the normalized resolution to be in a closed state; and when the conversion use times of a certain normalized resolution in the corresponding relation table is more than 0, setting the video encoder corresponding to the normalized resolution to be in an open state, and obtaining the on-off state of the multilayer video encoder in this way.
The correspondence table can be set with specific reference to table 2 below.
Table 2:
(Table 2 appears as an image in the original publication; it gives the correspondence between actual video resolutions and normalized resolutions.)
As shown in fig. 3, after the video sharing end obtains the video resolution list, it obtains the video resolution entries (VideoResNum) in the list, converts each one according to the correspondence between actual video resolutions and normalized resolutions in Table 2, and builds the normalized resolution list from the converted normalized resolutions.
The video watching ends' video resolution list is traversed, and the number of times each normalized resolution in the correspondence table is used in the conversion is counted; when a normalized resolution's use count equals 0, the video encoder of that layer is set to closed; otherwise, the video encoder of that layer is set to open, thereby obtaining the on/off states of the multi-layer video encoders.
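The map-count-switch procedure just described can be sketched as follows. Table 2 itself is shown only as an image in the original, so the mapping below is a hypothetical example; only the procedure (convert, count, open/close) follows the description.

```python
# Hypothetical normalization table and the counting rule described above; the
# actual Table 2 contents are not reproduced in the text.
from collections import Counter

NORMALIZATION_TABLE = {                 # actual resolution -> normalized resolution
    (1920, 1080): (1920, 1080),
    (1280, 720):  (1280, 720),
    (960, 540):   (1280, 720),          # several actual resolutions may share one normalized entry
    (640, 360):   (640, 360),
    (320, 180):   (640, 360),
}
ENCODER_LAYER = {(1920, 1080): 0, (1280, 720): 1, (640, 360): 2}   # normalized -> encoder layer

def encoder_switch_states(video_resolution_list):
    """Open a layer only if at least one watching end maps to its normalized resolution."""
    counts = Counter(NORMALIZATION_TABLE[res]
                     for res in video_resolution_list if res in NORMALIZATION_TABLE)
    return {layer: counts.get(norm, 0) > 0 for norm, layer in ENCODER_LAYER.items()}
```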
When setting the states of the multi-layer video encoders, parameter validity is checked first. If the parameters are valid, the state of each layer's video encoder is examined: when a video encoder's state switches from closed to open, i.e. the video encoder is restarted, the first frame encoded by that layer must be an IDR frame, so the IDR frame flag is set first and then the layer's encoder state is set; when a layer's video encoder is not switching from closed to open, its state is assigned directly.
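A minimal sketch of that state update, assuming each encoder is tracked as a small record, might look like this; the IDR flag is raised only on a closed-to-open transition, as the paragraph above describes, while everything else is assumed structure.

```python
# Assumed encoder-state records; only the closed->open transition forces an IDR frame.
def update_encoder_states(encoders, new_states):
    """encoders: {layer: {"open": bool, "force_idr": bool}}, new_states: {layer: bool}."""
    for layer, enc in encoders.items():
        wanted = new_states.get(layer, False)
        if not isinstance(wanted, bool):       # parameter validity check
            continue
        if wanted and not enc["open"]:         # encoder restart: first frame must be an IDR frame
            enc["force_idr"] = True
        enc["open"] = wanted                   # assign the layer's new state
```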
According to an embodiment of the present invention, before the shared video is respectively encoded by the on-based video encoder, the method further includes:
judging whether the aspect ratio of the shared video resolution accords with a preset ratio or not;
if not, clipping the shared original video, resampling to the size of the video resolution which can be coded by the opened maximum layer video coder, and delivering the clipped video to each opened video coder for coding with different video resolutions; and if so, directly delivering the shared original video to each started video encoder to perform encoding processing with different video resolutions.
It should be noted that the multi-layer video multi-thread coding implementation specifically includes two steps:
First, the actual video YUV data must be cropped and resampled before encoding. A 16:9 aspect ratio is used by default; when the aspect ratio of the video to be encoded does not meet this requirement, it is cropped and resampled to the video resolution of the first layer, i.e. the largest encoding layer.
As shown in fig. 4, the clipping processing performed on the shared original video specifically includes:
acquiring the width and height of the shared original video's YUV resolution and the width and height of the reference video resolution, where the reference video resolution is the video resolution encoded by the largest opened encoding layer;
calculating the aspect ratio srcRatio of the original video's YUV resolution and the aspect ratio dstRatio of the reference video's YUV resolution;
judging whether the difference srcRatio - dstRatio is greater than or equal to -0.000001 and less than or equal to 0.000001; if so, the width and height of the resampling target video resolution are equal to the width and height of the reference video resolution, respectively;
if not, then
judging whether srcRatio is larger than dstRatio; if so, cropping the shared original video YUV on the left and right, where the width of the resampling target video resolution = the reference video resolution height × 16/9, and the height of the resampling target video resolution equals the reference video resolution height; if not, then
cropping the shared original video YUV on the top and bottom, where the height of the resampling target video resolution = the reference video resolution width × 9/16, and the width of the resampling target video resolution equals the reference video resolution width;
judging whether resampling has been initialized; if not, setting the resampling parameters and initializing the resampling handle;
and cropping, scaling and resampling the original video YUV to obtain the to-be-encoded video data required for multi-layer coding.
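The target-size computation in this flow reduces to a few comparisons, as in the sketch below; it assumes the default 16:9 reference ratio and mirrors the epsilon test and the left-right / top-bottom crop choice from the srcRatio/dstRatio comparison in the text, while the actual YUV cropping and resampling (e.g. via a scaler library) are omitted.

```python
# Sketch of the crop/resample target computation (16:9 reference assumed).
def crop_resample_target(src_w, src_h, ref_w, ref_h, eps=0.000001):
    """Return (target_w, target_h, crop_mode) for resampling the original YUV."""
    src_ratio = src_w / src_h            # aspect ratio of the original video
    dst_ratio = ref_w / ref_h            # aspect ratio of the reference (largest-layer) resolution
    if -eps <= src_ratio - dst_ratio <= eps:
        return ref_w, ref_h, "none"      # ratios already match: no cropping needed
    if src_ratio > dst_ratio:
        return int(ref_h * 16 / 9), ref_h, "left-right"   # source wider: crop left/right
    return ref_w, int(ref_w * 9 / 16), "top-bottom"       # source taller: crop top/bottom
```

For instance, a 1920×1200 source against a 1280×720 reference yields a top-bottom crop with a 1280×720 resampling target.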
Secondly, a multi-thread encoder is adopted to carry out simultaneous encoding processing on the multi-layer video. The specific processing flow is shown in fig. 5.
According to the embodiment of the invention, the process flow of the multi-layer video parallel coding by adopting the multi-thread coder is as follows:
5.1 Create the multi-layer encoder: set the number of encoding layers (three-layer encoding by default), set and store the application-layer encoding parameters of each layer (resolution, frame rate and bit rate), enable only the smallest-layer encoder switch by default, and initialize the packing PTS.
5.2 Loop over the encoding layers (LayerNum): initialize each layer's encoder handle, set each layer's encoder parameters such as the necessary encoding width/height, bit rate, GOP size and frame rate, set some encoder optimization parameters such as the QP value and the rate-control method, and start the encoder thread once the encoder parameters are set.
5.3 In the first step of starting the encoder thread, create the encoder thread, pass the encoder handle to the corresponding encoder thread, and set the encoder thread's global parameters, thread semaphore and mutex.
5.4 At the start of the encoder thread function, initialize the thread's global parameters and the run flag, and initialize the default H264 encoding parameters.
5.5 Set the current layer's basic parameters and advanced encoding-optimization parameters, call the H264 encoder Open interface, open the encoder, save the encoder handle, and set the run flag to 1.
5.6 Check the thread semaphore; if the encoder semaphore has been set by the outer layer, exit the encoder thread function.
5.7 If the thread semaphore is not set, keep checking whether there is data in the current layer's to-be-encoded queue; if there is none, Sleep to yield the CPU.
5.8 If data is detected in the current layer's to-be-encoded queue, perform the current layer's video encoding. The first and core step of the encoding processing is to accumulate the current layer's PTS; the PTS is the core parameter the encoder needs for sequential temporal processing and is used in the main thread for data synchronization after multi-layer encoding.
5.9 Check whether the encoder's switch state is on; if not, exit directly. If the encoder is on, check whether its state has just switched from off to on, i.e. the video encoder is being restarted; if so, the first frame of this layer's encoding must be an IDR frame, so set the IDR frame flag first.
5.10 Set and construct the H264 encoding parameters; check whether the IDR frame flag is set, and if so, set the frame type required by the current encoder to IDR and reset the IDR frame flag to 0.
5.11 With the encoding parameters prepared, call the H264 encode interface to perform encoding; when the encoding interface's return value BufferSize is less than or equal to 0, encoding has failed and produced no data, so exit.
5.12 When the encoding interface's return value BufferSize is greater than 0, the specified frame data has been encoded successfully; put the encoded data into the packing queue.
5.13 When the main thread runs the multi-layer encoder, initialize and reset each layer's temporary parameters, and traverse the layers by LayerNum to resample the YUV data to be encoded for each layer.
5.14 When LayerNum is the first layer, the YUV data to be encoded already has the largest layer's encoding width and height, so no resampling is needed.
5.15 When LayerNum is not the first layer, resample according to the current layer's encoding width and height to obtain that layer's YUV data to be encoded.
5.16 Put the processed YUV data into the encoding queue corresponding to LayerNum; the encoder threads of the respective layers then perform the processing of steps 5.6 to 5.12 to encode the layers in parallel across threads.
5.17 Check the encoded packing queue; packing starts only after the packing queue has buffered 100 ms of data, where the number of buffered frames corresponding to 100 ms is calculated from the frame rate. If the buffer is insufficient, exit.
5.18 Once the packing queue has buffered enough data, accumulate the packing PTS, then dequeue and find the encoded frames whose PTS matches the packing PTS.
5.19 Pack the encoded frames with the same PTS across the layers into multi-layer video data, push out the packed multi-layer data, and send it to the video server F, which forwards it to the watching ends A/B/C, as shown in fig. 2.
5.20 After video packing, clean the to-be-packed queue: remove and release all encoded frames in the queue whose PTS is smaller than the current packing PTS.
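As a very rough illustration of the per-layer threads plus PTS-aligned packing in steps 5.1 to 5.20, the sketch below runs one worker thread per layer and packs frames whose PTS match across all layers; the H264 encoder calls, the 100 ms buffering rule and the IDR handling are deliberately replaced by placeholders, so this is an assumed skeleton rather than the patent's implementation.

```python
# Skeleton only: real H264 encoding, 100 ms buffering and IDR handling omitted.
import queue
import threading

class LayerEncoderThread(threading.Thread):
    def __init__(self, layer, packed_q):
        super().__init__(daemon=True)
        self.layer, self.packed_q = layer, packed_q
        self.in_q = queue.Queue()                  # this layer's to-be-encoded queue
        self.stop_flag = threading.Event()

    def run(self):
        pts = 0
        while not self.stop_flag.is_set():
            try:
                yuv = self.in_q.get(timeout=0.1)   # wait for YUV data for this layer
            except queue.Empty:
                continue                           # no data: yield the CPU
            pts += 1                               # accumulate this layer's PTS
            frame = ("encoded", self.layer, pts, len(yuv))   # placeholder for the encode call
            self.packed_q.put((pts, self.layer, frame))

def pack_same_pts(packed_q, num_layers, pending):
    """Group frames by PTS; yield a packed multi-layer unit once every layer contributed."""
    while not packed_q.empty():
        pts, layer, frame = packed_q.get()
        pending.setdefault(pts, {})[layer] = frame
        if len(pending[pts]) == num_layers:
            yield pts, pending.pop(pts)            # ready to send to the video server
```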
According to the invention, the video sharing end controls multi-layer video coding according to the watching ends' video resolution requirements in the conference. This achieves on-demand multi-layer video coding, saves video sharing end device resources and video server uplink bandwidth, and avoids the instability caused by resetting the multi-layer encoder whenever the required video resolutions keep changing, thereby improving users' experience with cloud conference end devices.
Fig. 6 shows a block diagram of a cloud conference adaptive multi-layer video coding system according to the present invention.
As shown in fig. 6, the second aspect of the present invention further provides a cloud conference adaptive multi-layer video coding system 6, which includes a memory 61 and a processor 62, where the memory includes a cloud conference adaptive multi-layer video coding method program, and when executed by the processor, the cloud conference adaptive multi-layer video coding method program implements the following steps:
presetting a plurality of layers of video encoders, wherein each video encoder can respectively encode video data with different video resolutions;
receiving a video resolution list about a video watching end in real time;
carrying out normalization processing on the video resolution list according to a normalization rule to obtain a normalized resolution list;
respectively selecting a corresponding video encoder to start for each normalized resolution in the normalized resolution list, and closing other video encoders;
and respectively carrying out coding processing on the shared video based on the started video coder to obtain video data with different video resolutions, and sending the video data to the corresponding video watching end for displaying.
According to the embodiment of the invention, the receiving of the video resolution list of the video watching end in real time specifically comprises the following steps:
in the same cloud conference, each video watching terminal respectively sends a video resolution request message VideoResReq to a video server based on the video resolution requirement of each video watching terminal;
after receiving the video resolution request message VideoResReq from each video watching end, the video server replies with a VideoRes message to each video watching end; at the same time, the video server aggregates the video resolution information of all video watching ends, arranges it into a video resolution list UpdateVideoList, and synchronizes the video resolution list to the video sharing end.
According to the embodiment of the invention, each video watching end sends a video resolution request message VideoResReq to the video server based on its own video resolution requirement, which specifically includes the following steps:
when a video watching end A joins the conference, it sends a video resolution request message VideoResReq to the video server according to the video resolution required by its default display layout; and/or
When a video watching end B that has joined the conference changes the display layout of the video it is currently watching in the cloud conference, and the video resolution required by the new layout differs from that required by the old layout, it sends a video resolution request message VideoResReq to the video server based on the video resolution required by the new layout; and/or
When a video watching end C that has joined the conference has more videos in the cloud conference than one page of its layout can display, the videos are displayed in pages; a page turn by the viewer changes which video resolutions are displayed, and a video resolution request message VideoResReq is sent to the video server based on the videos displayed after the page turn.
According to the embodiment of the present invention, receiving a video resolution list about a video viewing end in real time specifically includes:
when the video sharing end does not receive a video resolution list UpdateVideoList actively synchronized by the video server over a preset time period, actively sending a video resolution list request message to the video server;
returning, by the video server, a latest video resolution list UpdateVideoList to the video sharing end based on the video resolution list request message;
comparing the latest video resolution list UpdateVideoList with the last received video resolution list UpdateVideoList by the video sharing end, and if the latest video resolution list UpdateVideoList is consistent with the last received video resolution list UpdateVideoList, not changing the state of the multilayer video encoder; and if the video resolution lists are inconsistent, normalizing the UpdateVideoList according to a normalization rule to obtain a latest normalized resolution list, and updating the state of the multi-layer video encoder based on the latest normalized resolution list.
According to the embodiment of the present invention, the normalizing the video resolution list according to the normalization rule specifically includes:
presetting a corresponding relation table between actual video resolutions and normalized resolutions, wherein a plurality of normalized resolutions in the corresponding relation table are respectively in one-to-one correspondence with a multilayer video encoder, and the corresponding relation table supports that a single actual video resolution corresponds to a single normalized resolution and a plurality of different actual video resolutions correspond to a single normalized resolution;
converting each actual video resolution in the video resolution list into corresponding normalized resolution according to the corresponding relation table;
traversing the video resolution list, and counting the conversion use times of each normalized resolution in the corresponding relation table;
when the conversion use times of a certain normalized resolution in the corresponding relation table is equal to 0, setting a video encoder corresponding to the normalized resolution to be in a closed state; and when the conversion use times of a certain normalized resolution in the corresponding relation table is more than 0, setting the video encoder corresponding to the normalized resolution to be in an open state, and obtaining the on-off state of the multilayer video encoder in this way.
According to an embodiment of the present invention, when executed by the processor, the cloud conference adaptive multi-layer video coding method further includes:
judging whether the aspect ratio of the shared video resolution accords with a preset ratio or not;
if not, clipping the shared original video, resampling to the size of the video resolution which can be coded by the opened maximum layer video coder, and delivering the clipped video to each opened video coder for coding with different video resolutions; and if so, directly delivering the shared original video to each started video encoder to perform encoding processing with different video resolutions.
According to the embodiment of the invention, clipping processing is performed on the shared original video, which specifically comprises the following steps:
acquiring the width and height of the shared original video's YUV resolution and the width and height of the reference video resolution, where the reference video resolution is the video resolution encoded by the largest opened encoding layer;
calculating the aspect ratio srcRatio of the original video's YUV resolution and the aspect ratio dstRatio of the reference video's YUV resolution;
judging whether the difference srcRatio - dstRatio is greater than or equal to -0.000001 and less than or equal to 0.000001; if so, the width and height of the resampling target video resolution are equal to the width and height of the reference video resolution, respectively;
if not, then
judging whether srcRatio is larger than dstRatio; if so, cropping the shared original video YUV on the left and right, where the width of the resampling target video resolution = the reference video resolution height × 16/9, and the height of the resampling target video resolution equals the reference video resolution height; if not, then
cropping the shared original video YUV on the top and bottom, where the height of the resampling target video resolution = the reference video resolution width × 9/16, and the width of the resampling target video resolution equals the reference video resolution width;
judging whether resampling has been initialized; if not, setting the resampling parameters and initializing the resampling handle;
and cropping, scaling and resampling the original video YUV to obtain the to-be-encoded video data required for multi-layer coding.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a cloud conference adaptive multi-layer video coding method program, and when the cloud conference adaptive multi-layer video coding method program is executed by a processor, the steps of the cloud conference adaptive multi-layer video coding method described above are implemented.
Aiming at the video resolution list required by the video watching ends in a cloud conference, the video sharing end communicates with the video server over a long-lived message connection to obtain the watching ends' latest video resolution list; it then sets the adaptive multi-layer video coding parameters according to that list, performs multi-layer video coding on the video sharing end device according to the watching ends' needs in the cloud conference, and opens or closes individual layers of the multi-layer video encoder. In this way the device's CPU and network bandwidth are used reasonably, CPU resource waste is reduced, the sharing-end device's running performance and uplink bandwidth are preserved, and the stability problems caused by resetting the multi-layer encoder whenever the watching ends' video resolutions change are avoided. This effectively guarantees the reasonable use and stability of the video sharing end's resources in the cloud conference and further improves the user experience.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.

Claims (7)

1. A cloud conference self-adaptive multi-layer video coding method is characterized by comprising the following steps:
presetting multiple layers of video encoders, wherein each video encoder can encode video data at a different video resolution;
receiving, in real time, a video resolution list of the video watching ends;
normalizing the video resolution list according to a normalization rule to obtain a normalized resolution list;
for each normalized resolution in the normalized resolution list, selecting and starting the corresponding video encoder, and closing the other video encoders;
encoding the shared video with each started video encoder to obtain video data at different video resolutions, and sending the video data to the corresponding video watching ends for display;
wherein receiving, in real time, a video resolution list of the video watching ends specifically comprises: in the same cloud conference, each video watching end sends a video resolution request message VideoResReq to a video server based on its own video resolution requirement;
after receiving the video resolution request message VideoResReq from each video watching end, the video server replies with a message VideoRes to each video watching end; meanwhile, the video server collects and aggregates the video resolution information of all video watching ends, organizes it into a video resolution list UpdateVideoList, and synchronizes the video resolution list to the video sharing end;
wherein each video watching end sending a video resolution request message VideoResReq to the video server based on its own video resolution requirement specifically comprises: when a video watching end A joins the conference, sending a video resolution request message VideoResReq to the video server according to the video resolution required by its default display layout; and/or when a video watching end B that has joined the conference changes the display layout of the currently watched videos in the cloud conference, if the video resolution required by the new layout differs from that required by the old layout, sending a video resolution request message VideoResReq to the video server based on the video resolution required by the new layout; and/or when a video watching end C that has joined the conference has more videos in the cloud conference than a single page of its layout can display, displaying the videos page by page; the displayed video resolutions change with the page-turning actions of the video viewer, and a video resolution request message VideoResReq is sent to the video server based on the videos displayed after page turning.
2. The method according to claim 1, wherein receiving, in real time, a video resolution list of the video watching ends further comprises: when the video sharing end has not received, within a preset time period, a video resolution list UpdateVideoList actively synchronized by the video server, actively sending a video resolution list request message to the video server;
returning, by the video server, a latest video resolution list UpdateVideoList to the video sharing end based on the video resolution list request message;
comparing, by the video sharing end, the latest video resolution list UpdateVideoList with the last received video resolution list UpdateVideoList, and if they are consistent, leaving the state of the multi-layer video encoders unchanged;
and if they are inconsistent, normalizing the latest UpdateVideoList according to the normalization rule to obtain a latest normalized resolution list, and updating the state of the multi-layer video encoders based on the latest normalized resolution list.
3. The method according to claim 1, wherein normalizing the video resolution list according to a normalization rule specifically comprises: presetting a correspondence table between actual video resolutions and normalized resolutions, wherein the normalized resolutions in the correspondence table correspond one-to-one with the multi-layer video encoders, and the correspondence table supports both a single actual video resolution corresponding to a single normalized resolution and multiple different actual video resolutions corresponding to a single normalized resolution;
converting each actual video resolution in the video resolution list into the corresponding normalized resolution according to the correspondence table;
traversing the video resolution list and counting, for each normalized resolution in the correspondence table, the number of times it is used in the conversion;
when the usage count of a normalized resolution in the correspondence table equals 0, setting the video encoder corresponding to that normalized resolution to the closed state; and when the usage count of a normalized resolution in the correspondence table is greater than 0, setting the video encoder corresponding to that normalized resolution to the open state, thereby obtaining the on/off states of the multi-layer video encoders.
4. The cloud conference self-adaptive multi-layer video coding method according to claim 1, wherein before the shared video is encoded by the started video encoders, the method further comprises: judging whether the aspect ratio of the shared video resolution conforms to a preset ratio;
if not, clipping the shared original video and resampling it to the size of the video resolution encodable by the largest opened layer video encoder, and then delivering the clipped video to each started video encoder for encoding at the different video resolutions; and if so, directly delivering the shared original video to each started video encoder for encoding at the different video resolutions.
5. The cloud conference self-adaptive multi-layer video coding method according to claim 4, wherein clipping the shared original video specifically comprises:
acquiring the width and height of the YUV resolution of the shared original video and the width and height of a reference video resolution, wherein the width and height of the reference video resolution are the width and height of the video resolution encodable by the largest opened layer video encoder;
calculating the aspect ratio srcRatio of the YUV resolution of the original video and the aspect ratio dstRatio of the reference video resolution;
judging whether the difference srcRatio - dstRatio is greater than or equal to -0.000001 and less than or equal to 0.000001; if so, setting the width and height of the resampling target video resolution equal to the width and height of the reference video resolution, respectively;
if not, judging whether srcRatio is greater than dstRatio; if so, performing left-right clipping on the shared original video YUV, wherein the width of the resampling target video resolution = the height of the reference video resolution * 16/9, and the height of the resampling target video resolution is equal to the height of the reference video resolution; if not, performing top-bottom clipping on the shared original video YUV, wherein the height of the resampling target video resolution = the width of the reference video resolution * 9/16, and the width of the resampling target video resolution is equal to the width of the reference video resolution;
judging whether resampling initialization has been performed; if not, setting the resampling parameters and initializing the resampling handle; if so, directly using the initialized resampling handle;
and clipping, scaling and resampling the original video YUV to obtain the video data to be encoded required for multi-layer encoding.
6. A cloud conference self-adaptive multi-layer video coding system, characterized by comprising a memory and a processor, wherein the memory stores a cloud conference self-adaptive multi-layer video coding method program, and when the cloud conference self-adaptive multi-layer video coding method program is executed by the processor, the following steps are implemented:
presetting multiple layers of video encoders, wherein each video encoder can encode video data at a different video resolution;
receiving, in real time, a video resolution list of the video watching ends;
normalizing the video resolution list according to a normalization rule to obtain a normalized resolution list;
for each normalized resolution in the normalized resolution list, selecting and starting the corresponding video encoder, and closing the other video encoders;
encoding the shared video with each started video encoder to obtain video data at different video resolutions, and sending the video data to the corresponding video watching ends for display;
wherein receiving, in real time, a video resolution list of the video watching ends specifically comprises: in the same cloud conference, each video watching end sends a video resolution request message VideoResReq to a video server based on its own video resolution requirement;
after receiving the video resolution request message VideoResReq from each video watching end, the video server replies with a message VideoRes to each video watching end; meanwhile, the video server collects and aggregates the video resolution information of all video watching ends, organizes it into a video resolution list UpdateVideoList, and synchronizes the video resolution list to the video sharing end;
wherein each video watching end sending a video resolution request message VideoResReq to the video server based on its own video resolution requirement specifically comprises: when a video watching end A joins the conference, sending a video resolution request message VideoResReq to the video server according to the video resolution required by its default display layout; and/or when a video watching end B that has joined the conference changes the display layout of the currently watched videos in the cloud conference, if the video resolution required by the new layout differs from that required by the old layout, sending a video resolution request message VideoResReq to the video server based on the video resolution required by the new layout; and/or when a video watching end C that has joined the conference has more videos in the cloud conference than a single page of its layout can display, displaying the videos page by page; the displayed video resolutions change with the page-turning actions of the video viewer, and a video resolution request message VideoResReq is sent to the video server based on the videos displayed after page turning.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a cloud conference self-adaptive multi-layer video coding method program, and when the cloud conference self-adaptive multi-layer video coding method program is executed by a processor, the steps of the cloud conference self-adaptive multi-layer video coding method according to any one of claims 1 to 5 are implemented.
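By way of illustration only, and not as part of the claims, the aspect-ratio check and the resampling-target computation described in claims 4 and 5 could be sketched as follows, assuming the preset ratio is 16:9; the function and variable names (resample_target, src_w, ref_h, EPSILON) are hypothetical.

```python
EPSILON = 0.000001  # tolerance used when comparing the two aspect ratios


def resample_target(src_w, src_h, ref_w, ref_h):
    """Return (crop_mode, target_w, target_h) for the shared YUV frame.

    ref_w / ref_h is the resolution encodable by the largest opened layer encoder.
    """
    src_ratio = src_w / src_h
    dst_ratio = ref_w / ref_h

    if -EPSILON <= src_ratio - dst_ratio <= EPSILON:
        # Ratios already match: resample straight to the reference size.
        return "none", ref_w, ref_h
    if src_ratio > dst_ratio:
        # Source wider than 16:9: crop left/right, keep the reference height.
        return "left-right", int(ref_h * 16 / 9), ref_h
    # Source taller than 16:9: crop top/bottom, keep the reference width.
    return "top-bottom", ref_w, int(ref_w * 9 / 16)


if __name__ == "__main__":
    # A 4:3 camera frame against a 1280x720 top-layer encoder.
    print(resample_target(1280, 960, 1280, 720))   # ('top-bottom', 1280, 720)
    # An ultra-wide capture against the same reference.
    print(resample_target(2560, 1080, 1280, 720))  # ('left-right', 1280, 720)
```

In the two example calls the 4:3 source is cropped top and bottom and the ultra-wide source is cropped left and right, and both land on the 1280x720 size of the largest opened layer before being handed to the per-layer encoders.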
CN202110827124.7A 2021-07-21 2021-07-21 Cloud conference self-adaptive multi-layer video coding method, system and storage medium Active CN113286149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110827124.7A CN113286149B (en) 2021-07-21 2021-07-21 Cloud conference self-adaptive multi-layer video coding method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110827124.7A CN113286149B (en) 2021-07-21 2021-07-21 Cloud conference self-adaptive multi-layer video coding method, system and storage medium

Publications (2)

Publication Number Publication Date
CN113286149A (en) 2021-08-20
CN113286149B (en) 2021-09-24

Family

ID=77286885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110827124.7A Active CN113286149B (en) 2021-07-21 2021-07-21 Cloud conference self-adaptive multi-layer video coding method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113286149B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115118921B * 2022-08-29 2023-01-20 G Net Cloud Service Co Ltd Method and system for video screen-combining self-adaptive output in cloud conference

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100596705B1 * 2004-03-04 2006-07-04 Samsung Electronics Co., Ltd. Method and system for video coding for video streaming service, and method and system for video decoding
US8876601B2 (en) * 2012-03-27 2014-11-04 Electronics And Telecommunications Research Institute Method and apparatus for providing a multi-screen based multi-dimension game service
US9426478B2 (en) * 2014-07-21 2016-08-23 Cisco Technology, Inc. Resolution robust video quality metric
KR101770070B1 * 2016-08-16 2017-08-21 LINE Corporation Method and system for providing video stream of video conference
CN112235606A * 2020-12-11 2021-01-15 G Net Cloud Service Co Ltd Multi-layer video processing method, system and readable storage medium
CN112689148B * 2021-03-18 2021-06-01 G Net Cloud Service Co Ltd Method, system and storage medium for peak value removal of multi-layer video transmission in cloud conference

Also Published As

Publication number Publication date
CN113286149A (en) 2021-08-20

Similar Documents

Publication Publication Date Title
CN111882626B (en) Image processing method, device, server and medium
CN110557649B (en) Live broadcast interaction method, live broadcast system, electronic equipment and storage medium
US20220239719A1 (en) Immersive viewport dependent multiparty video communication
JP2022537576A (en) Apparatus, method and computer program for video encoding and decoding
US8872895B2 (en) Real-time video coding using graphics rendering contexts
US8908006B2 (en) Method, terminal and system for caption transmission in telepresence
KR101882596B1 (en) Bitstream generation and processing methods and devices and system
JP2014529258A (en) Network streaming of encoded video data
Jansen et al. A pipeline for multiparty volumetric video conferencing: transmission of point clouds over low latency DASH
KR20220011688A (en) Immersive media content presentation and interactive 360° video communication
US20220329883A1 (en) Combining Video Streams in Composite Video Stream with Metadata
CN114073097A (en) Facilitating video streaming and processing by edge computation
KR20140126372A (en) data, multimedia and video transmission updating system
US20230033063A1 (en) Method, an apparatus and a computer program product for video conferencing
US11120615B2 (en) Dynamic rendering of low frequency objects in a virtual reality system
CN113286149B (en) Cloud conference self-adaptive multi-layer video coding method, system and storage medium
EP3657803A1 (en) Generating and displaying a video stream
WO2023071469A1 (en) Video processing method, electronic device and storage medium
Zeng et al. A new architecture of 8k vr fov video end-to-end technology
CN112565670B (en) Method for rapidly and smoothly drawing multi-layer video of cloud conference
JP2024516133A (en) Anchoring Scene Descriptions to User Environment for Streaming Immersive Media Content
CN113630575A (en) Method, system and storage medium for displaying multi-person online video conference image
CN112470481A (en) Encoder and method for encoding tile-based immersive video
US20230239453A1 (en) Method, an apparatus and a computer program product for spatial computing service session description for volumetric extended reality conversation
US20220021913A1 (en) Method and apparatus for random access of 3d(ar) media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant