CN112770077A - Video conference I frame coding method and device

Video conference I frame coding method and device

Info

Publication number
CN112770077A
Authority
CN
China
Prior art keywords
terminal
frame
vpu
video
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911077748.0A
Other languages
Chinese (zh)
Other versions
CN112770077B (en)
Inventor
许磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201911077748.0A
Publication of CN112770077A
Application granted
Publication of CN112770077B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/14 - Systems for two-way working
    • H04N7/15 - Conference systems
    • H04N7/152 - Multipoint control units therefor
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Abstract

The invention provides a video conference I-frame encoding method in which a terminal that sends video update commands at a frequency greater than a preset threshold is identified; if the terminal is judged not to meet a preset condition, an I frame is encoded using reserved VPU resources and sent to that terminal. In other words, the method adopts an I-frame optimization strategy that isolates a terminal suffering decoding errors and encodes for it specifically, so that other participating terminals are not affected. This solves the problem that packet loss at some terminals in a video conference degrades all conference sites, and improves the video transmission quality of the video conference. The present disclosure also provides an I-frame encoding apparatus, a computer device, and a computer-readable medium.

Description

Video conference I frame coding method and device
Technical Field
The disclosure relates to the technical field of video conferences, in particular to a video conference I frame encoding method and device.
Background
With the rapid development of IP (Internet Protocol) networks and multimedia communication technologies, video conferencing systems are widely used in modern enterprises. A video conferencing system uses television equipment and conference terminals to bring multiple terminals, distributed across different locations, into the same conference through an MCU (Multipoint Control Unit).
In practice, small-scale and self-service conferences are becoming more common, networks are growing more complex, and some video conference systems run over the public Internet. In a conventional video conference system, multiple terminals typically watch a broadcast source, and when the conference capabilities of the terminals and the broadcast source are the same, the MCU simply forwards the video stream of the broadcast source.
When a terminal sends a video update command to the MCU, the MCU forwards the command to the broadcast source terminal so that the broadcast source encodes an I frame for the requesting terminal. If the broadcast source encodes I frames frequently, the terminals receiving the video stream suffer frequent image blurring or stuttering, which degrades the user's video experience. Moreover, if the terminal sending the video update command has the same video capability as other terminals in the conference, frequent I-frame encoding also blurs the image at those other terminals once they receive the video stream.
Disclosure of Invention
In view of the above-mentioned shortcomings in the prior art, the present disclosure provides a video conference I-frame encoding method and apparatus.
In a first aspect, an embodiment of the present disclosure provides a video conference I-frame encoding method applied to a video conference in which a multipoint control unit (MCU) and at least two terminals participate, the method including:
determining a terminal that sends video update commands at a frequency greater than a preset threshold;
if the terminal is judged not to meet a preset condition, encoding an I frame according to reserved video processing unit (VPU) resources;
and sending the I frame to the terminal.
Further, after encoding the I frame according to the VPU resources, the method further includes:
if the frequency of video update commands sent by the terminal is judged to be less than or equal to the threshold, releasing the VPU resources.
Preferably, the terminal does not satisfy the preset condition when:
the MCU does not transcode for the terminal using VPU resources, or the MCU transcodes for the terminal using VPU resources but the terminal does not have a unique coding capability.
Further, the video conference I-frame encoding method further includes: and if the terminal meets the preset condition, encoding an I frame and sending the I frame to the terminal.
Preferably, the terminal satisfies the preset condition when:
the MCU transcodes for the terminal using VPU resources, and the terminal has a unique coding capability.
Preferably, the VPU resources are reserved from the media processor resources after an enable-I-frame-preferred policy control instruction is received, where the enable-I-frame-preferred policy control instruction is received after each terminal joins the video conference.
Preferably, the encoding of the I frame according to the reserved video processing unit (VPU) resources includes:
if the reserved VPU resources are judged to be sufficient, encoding the I frame according to the reserved VPU resources;
and if the reserved VPU resources are judged to be insufficient, acquiring VPU resources matching the video capability of the terminal, and encoding the I frame according to the acquired VPU resources.
In another aspect, an embodiment of the present disclosure further provides an I-frame encoding apparatus, including a multipoint control module, a media codec module and a sending module, wherein the multipoint control module is configured to determine a terminal that sends video update commands at a frequency greater than a preset threshold, and to judge whether the terminal meets a preset condition;
the media codec module is configured to encode an I frame according to the reserved VPU resources when the terminal does not meet the preset condition;
the sending module is configured to send the I frame to the terminal.
In another aspect, an embodiment of the present disclosure further provides a computer device, including: one or more processors and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the video conference I-frame encoding method provided by the foregoing embodiments.
The disclosed embodiments also provide a computer readable medium, on which a computer program is stored, wherein the computer program, when executed, implements the video conference I-frame encoding method provided in the foregoing embodiments.
According to the video conference I-frame encoding method provided by the embodiments of the present disclosure, a terminal that sends video update commands at a frequency greater than a preset threshold is determined; if the terminal is judged not to meet a preset condition, an I frame is encoded according to the reserved VPU resources and sent to that terminal. In other words, the method adopts an I-frame optimization strategy that isolates a terminal suffering decoding errors and encodes for it specifically, so that other participating terminals are not affected. This solves the problem that packet loss and similar issues at some terminals in a video conference affect all conference sites, and improves the video transmission quality of the video conference.
Drawings
FIG. 1 is a system architecture diagram provided by an embodiment of the present disclosure;
fig. 2 is a flowchart of a video conference I-frame encoding method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a video conference I-frame encoding method according to another embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a method for determining that a terminal meets a predetermined condition according to yet another embodiment of the present disclosure;
fig. 5 is a flow chart of encoding an I frame according to another embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an I-frame encoding apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an I-frame encoding apparatus according to yet another embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an I-frame encoding apparatus according to another embodiment of the present disclosure.
Detailed Description
Example embodiments will be described more fully hereinafter with reference to the accompanying drawings, but they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Embodiments described herein may be described with reference to plan and/or cross-sectional views in light of idealized schematic illustrations of the disclosure. Accordingly, the example illustrations can be modified in accordance with manufacturing techniques and/or tolerances. Accordingly, the embodiments are not limited to the embodiments shown in the drawings, but include modifications of configurations formed based on a manufacturing process. Thus, the regions illustrated in the figures have schematic properties, and the shapes of the regions shown in the figures illustrate specific shapes of regions of elements, but are not intended to be limiting.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
One embodiment of the present disclosure provides a video conference I-frame encoding method applied to a system including an MCU and a plurality of terminals, as shown in fig. 1. The MCU exchanges video and audio data with the participating terminals over a transmission network (including but not limited to a 5G network), and performs multipoint call management and conference control, management and coordination of media resources, and media encoding and decoding; a conference administrator can manage the video conference service through the conference television service management system installed on the MCU. Each terminal performs media encoding and decoding as well as the forwarding and control of messages.
The video conference I-frame encoding method according to an embodiment of the present disclosure is described in detail below with reference to fig. 2. The method is applied to a video conference in which at least two terminals participate. Specifically, a conference administrator can select the participating terminals from a terminal list through the conference television service management system to convene the video conference, and each participating terminal can also actively call the MCU in various ways to join the conference. After every participating terminal has joined the video conference, the conference television service management system offers the conference administrator several conference modes to choose from: one is the conventional mode, and another is the I-frame preferred mode. If the conference administrator selects the I-frame preferred mode, the conference television service management system sends an enable-I-frame-preferred policy control instruction to the MCU, and the MCU reserves part of the VPU resources from the media processor resources for encoding I frames. When the I-frame preferred mode is enabled, as shown in fig. 2, the method includes the following steps:
and step 11, determining the terminal which sends the video updating command and has the frequency greater than a preset threshold value.
When a network problem (such as packet loss) occurs in the participating terminal and the complete image cannot be recovered, the participating terminal sends a video update command to the MUC to request the MCU to encode and send an I frame so that the participating terminal can recover the image according to the I frame. In this step, the MCU monitors the frequency of video update commands sent by each participating terminal, and screens out the terminals with higher frequency of sending video update commands.
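As a minimal sketch of this screening step (the sliding-window length and the threshold below are illustrative parameters, not values taken from this disclosure), the MCU-side monitoring could be modeled as follows:

```python
import time
from collections import defaultdict, deque
from typing import Optional


class VideoUpdateMonitor:
    """Counts video update commands per terminal over a sliding time window."""

    def __init__(self, window_seconds: float = 10.0, threshold: int = 5):
        self.window = window_seconds
        self.threshold = threshold
        self.events = defaultdict(deque)  # terminal_id -> timestamps of recent commands

    def record_command(self, terminal_id: str, now: Optional[float] = None) -> None:
        now = time.monotonic() if now is None else now
        history = self.events[terminal_id]
        history.append(now)
        # Discard commands that have fallen out of the observation window.
        while history and now - history[0] > self.window:
            history.popleft()

    def frequency_exceeds_threshold(self, terminal_id: str) -> bool:
        # Step 11: screen out terminals whose command frequency exceeds the preset threshold.
        return len(self.events[terminal_id]) > self.threshold
```

A production MCU would track this per conference and per media stream; the sketch only shows the per-terminal frequency test used in step 11.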
Step 12: if the terminal is judged not to meet the preset condition, encode an I frame according to the reserved VPU resources.
In this step, the multipoint control module of the MCU determines whether the terminal screened out in step 11 meets a preset condition; if not, the media codec module of the MCU encodes an I frame according to the reserved VPU resources. The process by which the MCU determines whether the terminal meets the preset condition is described in detail below with reference to fig. 4.
Step 13: send the I frame to the terminal.
In this step, the sending module of the MCU sends the encoded I frame to the terminal screened out in step 11 (i.e., the terminal sending video update commands at high frequency). Other participating terminals do not receive this I frame and are therefore unaffected.
As can be seen from steps 11 to 13, the video conference I-frame encoding method provided in this embodiment determines a terminal that sends video update commands at a frequency greater than a preset threshold and, if the terminal is judged not to satisfy a preset condition, encodes an I frame according to the reserved VPU resources and sends it to that terminal. In other words, the present disclosure adopts an I-frame optimization strategy that isolates a terminal suffering decoding errors and encodes for it specifically, so that other participating terminals are not affected. This solves the problem that packet loss at some terminals degrades all conference sites and improves the video transmission quality of the video conference.
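For illustration only, the decision flow of steps 11 to 13 might be expressed as follows. Here `mcu` is an assumed object exposing a monitor like the one sketched above plus placeholder methods (`meets_preset_condition`, `encode_iframe_with_reserved_vpu`, `encode_iframe`, `send_frame`, `forward_to_broadcast_source`) that stand in for the checks and media operations described elsewhere in this disclosure:

```python
def handle_video_update_command(mcu, terminal_id: str) -> None:
    """Hedged sketch of MCU-side handling of one video update command (steps 11-13)."""
    mcu.monitor.record_command(terminal_id)

    # Step 11: only a terminal whose command frequency exceeds the preset threshold
    # is handled by the I-frame optimization strategy.
    if not mcu.monitor.frequency_exceeds_threshold(terminal_id):
        mcu.forward_to_broadcast_source(terminal_id)  # conventional behaviour
        return

    # Step 12: a terminal that does not meet the preset condition receives an I frame
    # encoded from the reserved VPU resources; otherwise an I frame is encoded normally (step 14).
    if not mcu.meets_preset_condition(terminal_id):
        i_frame = mcu.encode_iframe_with_reserved_vpu(terminal_id)
    else:
        i_frame = mcu.encode_iframe(terminal_id)

    # Step 13: send the I frame to this terminal only, so other participants are unaffected.
    mcu.send_frame(terminal_id, i_frame)
```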
In another embodiment of the present disclosure, as shown in fig. 3, after the I frame is encoded according to the VPU resources and sent to the terminal (i.e., after step 13), the video conference I-frame encoding method may further include the following step:
and step 13', if the frequency of the video update command sent by the terminal is judged to be less than or equal to the threshold value, releasing the VPU resources.
In this step, the MCU monitors the frequency of video update commands sent by each participating terminal in real time, and when it is found that the frequency of video update commands sent by participating terminals that once used to encode I frames with reserved VPU resources is reduced below a threshold, it indicates that the current network problem of the participating terminal is alleviated, and the MCU does not need to send I frames to the participating terminal any more, so that the VPU resources can be released for other video conferences. Specifically, the multi-point control module of the MCU issues a VPU resource release instruction to the media processing module, and the media processing module notifies the media codec module to release VPU resources, so that the participating terminal can recover to the original node resources after VPU resources are released.
When the problem of packet loss of part of the conferencing terminals is solved, the VPU resources can be applied for image restoration processing, and the applied VPU resources can be released after the frequency of the video update command sent by the conferencing terminals is recovered to be normal, so that the effective utilization of the video conferencing resources is ensured.
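A corresponding sketch of the release path, again with illustrative names; `mcu.media_processing.release_vpu` stands in for the instruction chain from the multipoint control module through the media processing module to the media codec module:

```python
from typing import Iterable


def periodic_vpu_release_check(mcu, terminals_using_vpu: Iterable[str]) -> None:
    """Step 13': free applied VPU resources for terminals whose command frequency recovered."""
    for terminal_id in terminals_using_vpu:
        if not mcu.monitor.frequency_exceeds_threshold(terminal_id):
            # The network problem has eased: the multipoint control module issues a release
            # instruction, the media processing module notifies the media codec module, and
            # the terminal falls back to its original node resources.
            mcu.media_processing.release_vpu(terminal_id)
```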
The process of determining whether the terminal satisfies the preset condition is now described in detail with reference to fig. 4. As shown in fig. 4, the determination specifically includes:
and step 41, judging whether the MCU transcodes the terminal by using the VPU resource, if so, executing step 42, otherwise, executing step 12.
In this step, if the multi-point control module of the MCU determines to transcode the terminal using the VPU resource, the MCU further determines whether the terminal has a unique coding capability (i.e., step 42); if the multi-point control module of the MCU determines that the VPU resource is not used for transcoding the terminal, indicating that the terminal does not satisfy the preset condition, the media codec module of the MCU encodes the I frame according to the reserved VPU resource (i.e., step 12 is executed).
Step 42: judge whether the terminal has a unique coding capability; if so, execute step 14, otherwise execute step 12.
In this step, if the multipoint control module of the MCU determines that the terminal has a unique coding capability, then the MCU transcodes for the terminal using VPU resources and the terminal has a unique coding capability, so the terminal meets the preset condition, and the media codec module of the MCU encodes an I frame and sends it to the terminal (i.e., step 14 is executed). If the multipoint control module determines that the terminal does not have a unique coding capability, the terminal does not satisfy the preset condition, and the media codec module encodes the I frame according to the reserved VPU resources (i.e., step 12 is executed). In other words, if the judgment condition of either step 41 or step 42 fails, the terminal is considered not to satisfy the preset condition; only when both conditions are satisfied is the terminal considered to satisfy it.
Step 14: encode the I frame and send it to the terminal.
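Expressed as code, the two-part check of steps 41 and 42 reduces to a logical AND of two predicates; `is_transcoding_with_vpu` and `has_unique_coding_capability` are assumed accessors on the MCU's conference state, not names used in this disclosure:

```python
def meets_preset_condition(mcu, terminal_id: str) -> bool:
    """Steps 41-42: the terminal meets the preset condition only if both checks pass."""
    # Step 41: the MCU must already be transcoding for this terminal with VPU resources.
    if not mcu.is_transcoding_with_vpu(terminal_id):
        return False
    # Step 42: the terminal must additionally have a unique coding capability in the conference.
    return mcu.has_unique_coding_capability(terminal_id)
```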
In some embodiments of the present disclosure, as shown in fig. 5, encoding an I frame according to the reserved VPU resources when the terminal does not satisfy the preset condition (i.e., step 12) specifically includes:
step 121, determining whether the reserved VPU resources are sufficient, if so, executing step 122; otherwise, step 123 is performed.
In this step, if the MCU determines that the reserved VPU resources are sufficient to encode the I frame for the terminal, the I frame is encoded according to the reserved VPU resources (i.e., step 122 is executed). Specifically, when the multipoint control module of the MCU determines that the reserved VPU resources are sufficient, the media codec module of the MCU encodes the I frame according to the reserved VPU resources. If the MCU determines that the reserved VPU resource is not enough to encode the I frame for the terminal, it applies for VPU resource (i.e., step 123 is executed). Specifically, when the multipoint control module of the MCU judges that the reserved VPU resource is insufficient, the media processing module of the MCU acquires a VPU resource matched with the video capability of the terminal, and the media coding and decoding module of the MCU codes the I-frame according to the VPU resource acquired by the media processing module.
Step 122: encode the I frame according to the reserved VPU resources.
Step 123: acquire VPU resources matching the video capability of the terminal.
In this step, the multi-point control module of the MCU sends a VPU resource application message to the media processing module, and the media processing module forwards the VPU resource application message to the media codec module, so that the media codec module obtains a VPU resource matching the video capability of the terminal.
Step 124: encode the I frame according to the acquired VPU resources.
In this step, the media codec module encodes the I frame according to the obtained VPU resource.
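The resource handling of steps 121 to 124 can be sketched as below, under the assumption of a resource-pool object with illustrative method names (`reserved_sufficient_for`, `reserved_resources`, `acquire_matching`) for the sufficiency check and the application for matching VPU resources:

```python
def encode_iframe_with_reserved_vpu(mcu, terminal_id: str):
    """Steps 121-124: encode an I frame from reserved VPU resources, applying for more if needed."""
    if mcu.pool.reserved_sufficient_for(terminal_id):
        # Step 122: the reserved VPU resources suffice for this terminal's video capability.
        vpu = mcu.pool.reserved_resources()
    else:
        # Step 123: apply for VPU resources matching the terminal's video capability; the
        # media processing module forwards the application to the media codec module.
        vpu = mcu.pool.acquire_matching(mcu.video_capability(terminal_id))
    # Step 124 (or the tail of step 122): the media codec module encodes the I frame.
    return mcu.codec.encode_iframe(vpu, terminal_id)
```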
With the video conference system and method of the present disclosure, the media streams between the MCU and the terminals are adjusted in real time during the conference, improving the smoothness of the user's video and audio experience; the scheme is simple to implement and requires no additional modules.
Based on the same technical concept, an embodiment of the present disclosure further provides an I-frame encoding apparatus. As shown in fig. 6, the I-frame encoding apparatus includes a multipoint control module 61, a media codec module 62 and a sending module 63. The multipoint control module 61 is configured to determine a terminal that sends video update commands at a frequency greater than a preset threshold, and to judge whether the terminal meets a preset condition.
The media codec module 62 is configured to encode the I frame according to the reserved video processing unit (VPU) resources when the terminal does not satisfy the preset condition.
The sending module 63 is configured to send the I frame to the terminal.
Further, in some embodiments of the present disclosure, as shown in fig. 7, the I-frame encoding apparatus further includes a releasing module 64, where the releasing module 64 is configured to release the VPU resource when the frequency of sending video update commands by the terminal is less than or equal to the threshold.
Further, in some embodiments of the present disclosure, the media codec module 62 is further configured to encode an I frame when the multipoint control module 61 determines that the terminal meets the preset condition.
Further, in some embodiments of the present disclosure, as shown in fig. 8, the I-frame encoding apparatus further includes a receiving module 65 and a media processing module 66, where the receiving module 65 is configured to receive the enable-I-frame-preferred policy control instruction, which is received after each terminal joins the video conference.
The media processing module 66 is configured to reserve a part of the media processor resources as VPU resources after the receiving module receives the enable-I-frame-preferred policy control instruction.
In some embodiments of the present disclosure, the multipoint control module 61 is configured to determine whether the reserved VPU resources are sufficient.
The media processing module 66 is further configured to obtain a VPU resource matching the video capability of the terminal when the reserved VPU resource is insufficient.
The media codec module 62 is configured to encode the I frame according to the reserved VPU resources when the reserved VPU resources are sufficient, and to encode the I frame according to the VPU resources acquired by the media processing module when the reserved VPU resources are insufficient.
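For illustration, the module split of figs. 6 to 8 can be mirrored as plain objects wired together; the class and attribute names below are hypothetical and simply group the responsibilities listed above:

```python
class IFrameEncodingApparatus:
    """Hedged sketch of the apparatus in figs. 6-8; names are illustrative, not from the patent."""

    def __init__(self, multipoint_control, media_codec, sender,
                 releasing=None, receiving=None, media_processing=None):
        self.multipoint_control = multipoint_control  # module 61: screens terminals, checks conditions
        self.media_codec = media_codec                # module 62: encodes I frames on VPU resources
        self.sender = sender                          # module 63: delivers the I frame to one terminal
        self.releasing = releasing                    # module 64 (fig. 7): frees VPU resources
        self.receiving = receiving                    # module 65 (fig. 8): receives the enable instruction
        self.media_processing = media_processing      # module 66 (fig. 8): reserves/acquires VPU resources

    def on_video_update_command(self, terminal_id: str):
        """One possible collaboration of the modules for a single video update command."""
        if not self.multipoint_control.frequency_exceeds_threshold(terminal_id):
            return None
        if self.multipoint_control.meets_preset_condition(terminal_id):
            frame = self.media_codec.encode_iframe(terminal_id)
        else:
            frame = self.media_codec.encode_iframe_reserved(terminal_id)
        self.sender.send(terminal_id, frame)
        return frame
```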
An embodiment of the present disclosure further provides a computer device, including: one or more processors and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the video conference I-frame encoding method provided by the foregoing embodiments.
The disclosed embodiments also provide a computer readable medium, on which a computer program is stored, wherein the computer program, when executed, implements the video conference I-frame encoding method provided in the foregoing embodiments.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods disclosed above, functional modules/units in the apparatus, may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise, as would be apparent to one skilled in the art. It will, therefore, be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims (10)

1. A video conference I-frame encoding method applied to a video conference in which a multipoint control unit (MCU) and at least two terminals participate, comprising the following steps:
determining a terminal that sends video update commands at a frequency greater than a preset threshold;
if the terminal is judged not to meet a preset condition, encoding an I frame according to reserved video processing unit (VPU) resources;
and sending the I frame to the terminal.
2. The method of claim 1, further comprising, after encoding the I frame according to the VPU resources:
and if the frequency of the video updating command sent by the terminal is judged to be less than or equal to the threshold value, releasing the VPU resources.
3. The method of claim 1, wherein the terminal does not satisfy the preset condition when:
the MCU does not transcode for the terminal using VPU resources, or the MCU transcodes for the terminal using VPU resources but the terminal does not have a unique coding capability.
4. The method of claim 1, wherein the method further comprises: and if the terminal meets the preset condition, encoding an I frame and sending the I frame to the terminal.
5. The method of claim 4, wherein the terminal satisfies the preset condition when:
the MCU transcodes for the terminal using VPU resources, and the terminal has a unique coding capability.
6. The method of claim 1, wherein the VPU resources are reserved from media processor resources upon receiving an enable I-frame-preferred policy control instruction received after each terminal joins the video conference.
7. The method of claim 1, wherein the encoding of the I-frame according to reserved Video Processing Unit (VPU) resources comprises:
if the reserved VPU resources are judged to be sufficient, encoding an I frame according to the reserved VPU resources;
and if the reserved VPU resources are judged to be insufficient, acquiring the VPU resources matched with the video capability of the terminal, and encoding the I frame according to the acquired VPU resources.
8. An I-frame encoding apparatus, comprising a multipoint control module, a media coding and decoding module and a sending module, wherein the multipoint control module is configured to determine a terminal that sends video update commands at a frequency greater than a preset threshold, and to judge whether the terminal meets a preset condition;
the media coding and decoding module is used for coding an I frame according to the reserved VPU resource when the terminal does not meet the preset condition;
the sending module is configured to send the I frame to the terminal.
9. A computer device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video conference I-frame encoding method of any of claims 1-7.
10. A computer readable medium having stored thereon a computer program, wherein said program when executed implements a video conference I-frame encoding method according to any one of claims 1-7.
CN201911077748.0A 2019-11-06 2019-11-06 Video conference I frame coding method and device Active CN112770077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911077748.0A CN112770077B (en) 2019-11-06 2019-11-06 Video conference I frame coding method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911077748.0A CN112770077B (en) 2019-11-06 2019-11-06 Video conference I frame coding method and device

Publications (2)

Publication Number Publication Date
CN112770077A true CN112770077A (en) 2021-05-07
CN112770077B CN112770077B (en) 2023-11-10

Family

ID=75692699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911077748.0A Active CN112770077B (en) 2019-11-06 2019-11-06 Video conference I frame coding method and device

Country Status (1)

Country Link
CN (1) CN112770077B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101668158A (en) * 2008-09-05 2010-03-10 中兴通讯股份有限公司 Video switching processing method and multi-point control unit
CN101710962A (en) * 2009-12-22 2010-05-19 中兴通讯股份有限公司 Method and device for distributing video conference resources
US20110261147A1 (en) * 2010-04-27 2011-10-27 Ashish Goyal Recording a Videoconference Using a Recording Server
US20110310217A1 (en) * 2010-06-18 2011-12-22 Microsoft Corporation Reducing use of periodic key frames in video conferencing
US20130162758A1 (en) * 2011-12-27 2013-06-27 Electronics And Telecommunications Research Institute Video conference control system and method for reserving video conference

Also Published As

Publication number Publication date
CN112770077B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
EP2863632B1 (en) System and method for real-time adaptation of a conferencing system to current conditions of a conference session
US8180915B2 (en) Coding data streams
US6986158B1 (en) System and method for distributing video information over network
Baccichet et al. Low-delay peer-to-peer streaming using scalable video coding
US20080100694A1 (en) Distributed caching for multimedia conference calls
CN100454820C (en) MCU cascade system and establishing and communication method for the same
RU2516597C2 (en) Method and system for unified management of service channels and streaming media services on demand
JP5650197B2 (en) Method and apparatus for a system for providing media through multicast distribution
US9948889B2 (en) Priority of uplink streams in video switching
US9071727B2 (en) Video bandwidth optimization
KR20180031547A (en) Method and apparatus for adaptively providing multiple bit rate stream media in server
CA2758763C (en) Method and device for fast pushing unicast stream in fast channel change
CN110971863B (en) Multi-point control unit cross-area conference operation method, device, equipment and system
WO2021052077A1 (en) Videoconferencing method, first terminal, mcu, system, and storage medium
JP2019530996A (en) Method and apparatus for use of compact parallel codec in multimedia communications
CN111147860B (en) Video data decoding method and device
CN111741248B (en) Data transmission method, device, terminal equipment and storage medium
US20130135427A1 (en) Techniques For a Rate-Adaptive Video Conference Bridge
WO2013132360A1 (en) Identifying and transitioning to an improved voip session
US9374232B2 (en) Method and a device for optimizing large scaled video conferences
WO2009123997A1 (en) Video swithching without instantaneous decoder refresh-frames
CN114600468A (en) Combining video streams with metadata in a composite video stream
CN108206925B (en) Method and device for realizing multichannel video call and multichannel terminal
WO2017003768A1 (en) Methods and apparatus for codec negotiation in decentralized multimedia conferences
KR102340490B1 (en) Platform system and method of transmitting real time video for an ultra low latency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant