WO2019128266A1 - Transmission method and apparatus for a video conference, and MCU - Google Patents

Transmission method and apparatus for a video conference, and MCU

Info

Publication number
WO2019128266A1
WO2019128266A1 · PCT/CN2018/101956 · CN2018101956W
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
conference
video
mcu
video conference
Prior art date
Application number
PCT/CN2018/101956
Other languages
English (en)
French (fr)
Inventor
孟军 (Meng Jun)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Priority to EP18895218.8A priority Critical patent/EP3734967A4/en
Priority to US16/958,780 priority patent/US20200329083A1/en
Publication of WO2019128266A1 publication Critical patent/WO2019128266A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40Support for services or applications
    • H04L65/403Arrangements for multi-party communication, e.g. for conferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/152Multipoint control units therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details
    • H04L12/16Arrangements for providing special services to substations
    • H04L12/18Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1822Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066Session management
    • H04L65/1083In-session procedures
    • H04L65/1089In-session procedures by adding media; by removing media
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40Support for services or applications
    • H04L65/402Support for services or applications wherein the services involve a main real-time session and one or more additional parallel non-real time sessions, e.g. downloading a file in a parallel FTP session, initiating an email or combinational services
    • H04L65/4025Support for services or applications wherein the services involve a main real-time session and one or more additional parallel non-real time sessions, e.g. downloading a file in a parallel FTP session, initiating an email or combinational services where none of the additional parallel sessions is real time or time sensitive, e.g. downloading a file in a parallel FTP session, initiating an email or combinational services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • H04L65/765Media network packet handling intermediate
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/155Conference systems involving storage of or access to video conference sessions

Definitions

  • This document relates to the field of communications, for example, to a method and apparatus for transmitting video conferences, and an MCU.
  • A video conferencing system in the related art is a remote communication system that supports bidirectional transmission of voice and video, through which users in different places can carry out real-time audio and video communication approximating a face-to-face meeting.
  • the video conferencing system must have a video conferencing terminal and a Multipoint Control Unit (MCU).
  • the terminal is a device used by the user.
  • The terminal collects the user's voice and video data, sends it to the remote end via the network, and plays the remote audio and video data received from the network to the user.
  • The MCU is responsible for multi-party conference management and for the exchange and mixing of the audio and video data of the conference terminals.
  • Videoconferencing in the related art mostly uses IP networks for data transmission of video conferences.
  • the higher the bandwidth the more data can be transmitted, thereby providing better quality of service.
  • On a private network, bandwidth can generally be provisioned as needed; but on the Internet or leased private lines, bandwidth resources are very limited, and the higher the bandwidth, the higher the user's cost.
  • A video conference in the related art is set up so that each video terminal establishes a connection with the MCU: the terminal uploads its own audio and video data to the MCU, the MCU transmits the conference audio and video data to the terminal, and each terminal occupies symmetric uplink and downlink bandwidth.
  • For example, the uplink and downlink bandwidth of the MCU can each reach 200M.
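The linear growth described above can be illustrated with a small, hypothetical calculation (the function name and the 100-terminal figure are illustrative assumptions, not values from the patent):

```python
def mcu_star_bandwidth(num_terminals: int, per_terminal_mbps: float):
    """Conventional mode: every terminal exchanges its full conference
    bandwidth with the MCU, so MCU uplink and downlink both grow
    linearly with the number of terminals."""
    uplink = num_terminals * per_terminal_mbps    # every terminal uploads to the MCU
    downlink = num_terminals * per_terminal_mbps  # the MCU sends one stream per terminal
    return uplink, downlink

# e.g. 100 terminals at 2 Mbit/s each -> 200 Mbit/s in each direction at the MCU
print(mcu_star_bandwidth(100, 2.0))  # (200.0, 200.0)
```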
  • the embodiment of the present invention provides a method and device for transmitting a video conference, and an MCU.
  • A transmission method for a video conference is provided, including: instructing a first terminal joining a video conference to send its video data to a multicast address of the video conference and its audio data to the multipoint control unit (MCU); and instructing a conference terminal to send its audio data to the MCU, where the conference terminal is a terminal in the video conference other than the first terminal.
  • Another transmission method for a video conference is provided, including: receiving the audio data of all participating terminals in a video conference and the video data of a first terminal among all the participating terminals; sending the audio data to all the participating terminals; and sending the video data to the terminals in the video conference other than the first terminal.
  • A multipoint control unit is provided, including: a first indication module configured to instruct a first terminal joining a video conference to send its video data to a multicast address of the video conference and its audio data to the multipoint control unit (MCU); and a second indication module configured to instruct a conference terminal to send its audio data to the MCU, where the conference terminal is a terminal in the video conference other than the first terminal.
  • Another transmission apparatus for a video conference is provided, including: a receiving module configured to receive the audio data of all participating terminals in a video conference and the video data of a first terminal among all the participating terminals; and a sending module configured to send the audio data to all the participating terminals and send the video data to the terminals in the video conference other than the first terminal.
  • a storage medium having stored therein a computer program, wherein the computer program is configured to execute the steps of any one of the method embodiments described above.
  • An electronic device is provided, comprising a memory and a processor, wherein the memory stores a computer program and the processor is arranged to run the computer program to perform the steps in any one of the above method embodiments.
  • FIG. 1 is a network architecture diagram of an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a method for transmitting a video conference according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a method of transmitting a video conference according to another embodiment of the present disclosure
  • FIG. 4 is a structural block diagram of a multipoint control unit MCU according to an embodiment of the present invention.
  • FIG. 5 is a structural block diagram of a transmission apparatus for a video conference according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a video conference system according to an embodiment of the present disclosure.
  • FIG. 7 is a flowchart of a video conference held under low bandwidth conditions in an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing the flow of data of a low bandwidth video conference according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of a multicast media channel in which a low bandwidth video conference opens a sending direction according to an embodiment of the present invention
  • FIG. 10 is a flowchart of a work process when a video conference changes a broadcast source according to an embodiment of the present invention.
  • FIG. 11 is a flowchart of switching the terminal viewed by the broadcast source in a video conference according to an embodiment of the present invention.
  • FIG. 1 is a network architecture diagram of an embodiment of the present disclosure, where the network architecture includes an MCU and at least one terminal. During the video conference, the terminals interact with each other through the MCU in real time.
  • FIG. 2 is a flowchart of a transmission method for a video conference according to an embodiment of the present disclosure. As shown in FIG. 2, the process includes step S202 and step S204.
  • In step S202, the first terminal that joins the video conference is instructed to send its video data to the multicast address of the video conference and its audio data to the multipoint control unit (MCU).
  • In step S204, the conference terminal is instructed to send its audio data to the MCU, where the conference terminal is a terminal in the video conference other than the first terminal.
  • Through the above steps, the network resources used by some terminals in the video conference to transmit video data are saved, avoiding the phenomenon in the related art that a video conference occupies excessive network resources, reducing the occupation of network bandwidth, reducing network congestion, and improving the utilization of network resources.
  • the execution body of the foregoing steps may be a management device of a video conference, a control device, an MCU, a server, etc., but is not limited thereto.
  • The order of step S202 and step S204 is interchangeable; that is, step S204 may be performed first, followed by step S202.
  • The method further includes: controlling the conference terminal to refrain from sending uplink video data. This can be implemented by setting the terminal's uplink bandwidth to 0, by not allocating uplink bandwidth, or by instructing the conference terminal not to send uplink video data.
  • the method further comprises: transmitting video data of the second terminal joining the video conference to the first terminal.
  • the second terminal may be any terminal other than the first terminal in the video conference, and may be specified by a policy or randomly assigned.
  • sending the video data of the second terminal that joins the video conference to the first terminal includes: sending the video data of the second terminal that joins the video conference to the MCU, and sending the data to the first terminal through the MCU.
  • the second terminal may also send the video data directly to the first terminal.
  • The video data of the first terminal includes: video data collected by the first terminal, and the video data of the second terminal received by the first terminal.
  • The collected video data may be video of the first terminal's user captured by its camera; the received video data of the second terminal is external video data, which is broadcast uniformly to the other terminals through the first terminal, reducing the transmission bandwidth.
  • The method further includes: sending the audio data received by the MCU to the first terminal and the conference terminal, and sending the video data received at the multicast address to the conference terminal.
  • Sending the audio data received by the MCU to the first terminal and the conference terminal includes: performing mixing processing on all the audio data received by the MCU, and sending the mixed audio data to the first terminal and the conference terminal.
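As a minimal sketch of this mixing step, assuming integer PCM samples and ignoring clipping; the patent does not say whether a terminal's own audio is excluded from its mix, so a "mix-minus" that excludes it is assumed here:

```python
def mix_audio(frames):
    """MCU-side mixing sketch: for each terminal, sum the audio frames of
    all *other* terminals sample by sample (mix-minus, so a speaker does
    not hear an echo of itself). frames maps terminal id -> PCM samples."""
    mixed = {}
    for tid in frames:
        others = [f for other, f in frames.items() if other != tid]
        mixed[tid] = [sum(s) for s in zip(*others)] if others else []
    return mixed

# Three terminals, two samples each:
out = mix_audio({"t1": [1, 2], "t2": [10, 20], "t3": [100, 200]})
print(out["t1"])  # t1 hears t2 + t3: [110, 220]
```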
  • The method further includes: determining a designated terminal in the video conference and sending the video data of the designated terminal to the multicast address, where the designated terminal is the terminal currently speaking in the video conference.
  • The designated terminal defaults to the first terminal at the beginning of the video conference; after the video conference starts, it switches, according to the conference situation, to the terminal corresponding to the user whose picture needs to be displayed, such as the terminal currently speaking or the terminal corresponding to the chairman, etc.
  • the method further includes: controlling the first terminal to refuse to send the uplink video data.
  • Before the video data of the first terminal joining the video conference is sent to the multicast address of the video conference and the audio data of the first terminal is sent to the multipoint control unit (MCU), the method further includes: creating the video conference and configuring the multicast address of the video conference.
  • Before the video data of the first terminal joining the video conference is sent to the multicast address of the video conference, the method further includes: setting at least one first terminal, where the at least one first terminal is located in different location areas, which can be applied to a distributed network.
  • Sending the video data received by the MCU to the conference terminal through the multicast address includes: encoding the video data received by the MCU to obtain video data in at least one format, where video data in different formats occupies different transmission bandwidths; and sending the encoded video data to the conference terminal through at least one multicast address.
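One way to read this is as a simulcast-over-multicast arrangement: each encoded format is carried on its own multicast group, and a terminal joins the group whose bitrate its downlink can carry. The addresses, ports, and format table below are illustrative assumptions, not values from the patent:

```python
# Hypothetical format table: one multicast group per encoded format.
FORMATS = [
    {"name": "high", "bitrate_kbps": 2048, "group": ("239.1.1.1", 5004)},
    {"name": "low",  "bitrate_kbps": 512,  "group": ("239.1.1.2", 5004)},
]

def pick_group(available_kbps: int):
    """Choose the highest-bitrate format the terminal's downlink can carry,
    falling back to the lowest-bitrate format if none fits."""
    viable = [f for f in FORMATS if f["bitrate_kbps"] <= available_kbps]
    chosen = (max(viable, key=lambda f: f["bitrate_kbps"]) if viable
              else min(FORMATS, key=lambda f: f["bitrate_kbps"]))
    return chosen["group"]

print(pick_group(1000))  # a 1 Mbit/s downlink joins the 512 kbit/s group
```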
  • FIG. 3 is a flowchart of another transmission method for a video conference according to an embodiment of the present disclosure. As shown in FIG. 3, the process includes step S302 and step S304.
  • In step S302, the audio data of all participating terminals in the video conference and the video data of a first terminal among all the participating terminals are received.
  • In step S304, the audio data is sent to all the participating terminals, and the video data is sent to the terminals in the video conference other than the first terminal.
  • a video conference transmission apparatus is further provided, which is configured to implement the foregoing method embodiments, and details are not described herein.
  • FIG. 4 is a structural block diagram of a multipoint control unit MCU according to an embodiment of the present invention. As shown in FIG. 4, the apparatus includes a first indication module 40 and a second indication module 42.
  • the first indication module 40 is configured to indicate that the video data of the first terminal that joins the video conference is sent to the multicast address of the video conference, and the audio data of the first terminal is sent to the multi-point control unit MCU.
  • the second indication module 42 is configured to instruct the conference terminal to send the audio data of the conference terminal to the MCU, where the conference terminal is a terminal other than the first terminal in the video conference.
  • FIG. 5 is a structural block diagram of a transmission apparatus for a video conference according to an embodiment of the present invention. As shown in FIG. 5, the apparatus includes a receiving module 50 and a sending module 52.
  • the receiving module 50 is configured to receive audio data of all participating terminals in the video conference, and video data of the first terminal in all the participating terminals.
  • the sending module 52 is configured to send the audio data to all the participating terminals, and send the video data to the terminal other than the first terminal in the video conference.
  • each of the above modules may be implemented by software or hardware.
  • The foregoing may be implemented in, but is not limited to, the following manner: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
  • The following embodiments further clarify the solution of the present application in combination with application scenarios.
  • An embodiment of the present invention provides a method and a device for implementing large-scale networking of a multi-party video conference under low-bandwidth conditions, thereby enabling large-scale multi-party video conference networking under low bandwidth and avoiding the problem in related video conference networking that a large-scale multi-party video conference cannot be held under extremely limited bandwidth resources.
  • The device for the video conference includes: a video conference MCU, a video conference service management system, conference terminals (where a conference terminal may be a conference-room hardware terminal, a personal computer (PC) soft terminal, a Web Real-Time Communication (WebRTC) terminal, a mobile device soft terminal, etc.), and a video conference low bandwidth control module.
  • Network multicast, network unicast, and traffic control technologies are used to control large-scale low-bandwidth networking in terms of conference creation, conference convening, conference terminal access, video source control, and bandwidth control.
  • a method for implementing large-scale multi-party video conference networking under low bandwidth conditions includes the following eight steps.
  • In the first step, the user creates a conference in the video conference service management system, selects the conference terminals, and designates the conference as a low-bandwidth conference mode.
  • The term "low-bandwidth conference" may be described in different ways; its core is to save bandwidth resources, which distinguishes it from a conventional conference.
  • In the second step, the low-bandwidth conference parameters are configured, including the multicast address, the primary video multicast port, and the secondary video multicast port.
  • In the third step, the video conference service management system sends the conference information to the MCU; the MCU calls each conference terminal to join the conference and instructs the terminals to perform flow control on their uplink video.
  • In the fourth step, the video conference low bandwidth control module sets the first terminal that joins the conference (corresponding to the first terminal in the foregoing embodiments) as the broadcast source, instructs the broadcast source terminal to send its video media data to the configured multicast address and its audio data to the MCU, and sets the terminal's audio and video media receiving source to the MCU.
  • In the fifth step, the video conference low bandwidth control module sets the second terminal that joins the conference as the terminal viewed by the broadcast source (corresponding to the second terminal in the foregoing embodiments); the audio and video data of this terminal are sent to the MCU, the video data is forwarded by the MCU to the broadcast source terminal, and the terminal's video media receiving address is the multicast address.
  • In the sixth step, the video conference low bandwidth control module sets the video media receiving address of the terminals other than the broadcast source terminal and the terminal viewed by the broadcast source to the configured multicast address; their audio data is sent to the MCU, and their uplink video bandwidth is controlled to 0, that is, these terminals do not send local video data to the MCU. The audio data of all terminals is mixed by the MCU and then sent to each terminal.
  • In the seventh step, when the broadcast source is switched, the video conference low bandwidth control module restores the video uplink bandwidth of the currently speaking terminal to the original bandwidth and sets that terminal's video receiving address to the MCU; the uplink video bandwidth of the original broadcast source terminal is controlled to 0, and its video receiving address is set to the multicast address.
  • In the eighth step, when the terminal viewed by the broadcast source is switched, the video conference low bandwidth control module restores the video uplink bandwidth of the newly viewed terminal to the original bandwidth and controls the uplink video bandwidth of the originally viewed terminal to 0.
  • The sequence of the above conference-convening steps may be appropriately adjusted.
  • The initial selection of the broadcast source and of the terminal viewed by the broadcast source may follow different rules, but neither of the two video sources of the conference can be absent.
  • With this scheme, the uplink bandwidth required by the conference is two video bandwidths (the broadcast source and the terminal viewed by the broadcast source) plus the total audio bandwidth of all terminals, and the downlink bandwidth is one video bandwidth (the broadcast source) plus the total audio bandwidth of all terminals.
  • For example, at a conference bandwidth of 2M, using the method and device of this embodiment, the uplink video bandwidth of the MCU is about 4M and the downlink video bandwidth is about 4M, whereas in the conventional mode the uplink and downlink bandwidths of the MCU would both be about 2000M.
  • The present application can greatly reduce the bandwidth resources required for a video conference; as the number of participating terminals increases, the required bandwidth grows only by the corresponding amount of audio bandwidth, which alleviates the shortage of bandwidth resources.
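The stated formulas can be put side by side with the conventional mode numerically. The 64 kbit/s audio bitrate and the 1000-terminal count below are assumptions for illustration, not figures from the patent:

```python
def low_bandwidth_mcu(num_terminals, video_mbps, audio_mbps):
    """Per the scheme above: MCU uplink carries two video streams (the
    broadcast source and the terminal it views) plus every terminal's
    audio; downlink carries one video bandwidth plus the mixed audio
    sent to every terminal."""
    uplink = 2 * video_mbps + num_terminals * audio_mbps
    downlink = 1 * video_mbps + num_terminals * audio_mbps
    return uplink, downlink

# 1000 terminals, 2 Mbit/s video, 64 kbit/s audio each:
up, down = low_bandwidth_mcu(1000, 2.0, 0.064)
print(round(up, 3), round(down, 3))  # tens of Mbit/s, versus ~2000 Mbit/s each way conventionally
```

As the comment notes, video bandwidth at the MCU stays constant while only the audio term grows with the terminal count.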
  • This embodiment can flexibly realize interaction between the terminals.
  • Each conference terminal can be viewed by the main site or the chairman as needed, or can conduct voice communication at any time; at the same time, the uplink bandwidth of the conference terminals is managed by traffic control technology, which can greatly reduce the uplink bandwidth from the video conference terminals toward the MCU.
  • The audio data of the conference can also be controlled in the same way as the video data to further reduce bandwidth, but then the conference terminals cannot speak at any time and manual control by the conference controller is needed. This embodiment does not describe the audio processing in detail.
  • Embodiment 3 also includes the following implementation examples:
  • FIG. 6 is a schematic structural diagram of a video conference system according to the embodiment.
  • the entire system includes a video conference service management system, a video conference MCU, and a certain number of video conference terminals.
  • A video conference low bandwidth control module is added to the MCU in this embodiment.
  • the video conference service management system is an operation interface for holding a video conference, and the conference caller user creates and manages the conference through the interface.
  • the video conference multi-point control unit is a core device in a video conference, and is mainly responsible for signaling and code stream processing with each video conference terminal or other MCU.
  • After the audio and video images are collected by the conference television terminal, they are compressed by the video conference coding algorithm and sent to the remote MCU or video conference terminal through the IP network. After receiving the code stream, the remote video conference terminal decodes it and plays it to the user.
  • the video conference low bandwidth control module controls the video data bandwidth and the transmission direction of the terminal in the conference through the video conference standard protocol, so as to reduce the uplink and downlink bandwidth of the MCU, thereby implementing large-scale video conference under low bandwidth conditions.
  • FIG. 7 is a flowchart of a video conference held under the condition of low bandwidth according to the embodiment.
  • the conference caller sets the conference basic information such as the conference name, conference bandwidth, audio and video format, and the list of terminals that need to participate in the conference.
  • Set information related to low bandwidth control of the video conference including multicast address, primary video multicast port, and secondary video multicast port.
  • the conference service management system sends the conference information to the MCU, and the MCU holds the conference.
  • the MCU allocates the MCU media and network processing resources according to the conference information.
  • the MCU creates a multicast packet and calls the terminal participating in the conference to access the conference one by one.
  • There are two main video sources in the conference: one broadcast source terminal and one terminal viewed by the broadcast source.
  • the video of the broadcast source terminal is viewed by other terminals in the conference, and the broadcast source terminal views another terminal, which is defined as the terminal viewed by the broadcast source.
  • the two video sources can be dynamically switched to other terminals during the conference.
  • In step 5, when the first terminal accesses the conference, the MCU sets the terminal as the conference broadcast source; the video conference low bandwidth control module notifies the terminal to send its video data to the MCU, and the MCU receives the video of the broadcast source terminal and forwards it to the multicast address for the terminals in the multicast group to watch.
  • In step 6, when the second terminal accesses the conference, the MCU sets the terminal as the terminal viewed by the conference broadcast source; the video conference low bandwidth control module notifies the terminal to send its video data to the MCU, and the MCU sends the video of the second terminal to the broadcast source terminal for viewing. At the same time, the video conference low bandwidth control module adds the terminal to the multicast group, and the terminal receives and plays the video forwarded from the multicast address.
  • When subsequent terminals access the conference, the MCU determines that there are already a broadcast source and a terminal viewed by the broadcast source; the video conference low bandwidth control module directly adds these terminals to the multicast address and controls their video uplink bandwidth to 0, that is, these terminals do not need to send video data to the MCU and instead receive and play the conference multicast video.
  • When a terminal joins the multicast group, the multicast attribute needs to be indicated. Here the H.323 protocol is used as an example (other communication protocols are also applicable). The key parameters of the multicast capability set are receiveMultipointCapability, transmitMultipointCapability, and receiveAndTransmitMultipointCapability; the capability is set according to whether the terminal is the broadcast source, the terminal viewed by the broadcast source, or an ordinary terminal.
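The original listing for this capability assignment is not reproduced in the source text. The role-based selection it describes might be sketched as follows (Python pseudocode; the role names and the mapping are assumptions drawn from the surrounding description, and a real implementation would set these flags in the H.245 ASN.1 capability structures):

```python
def multipoint_capability(role: str) -> dict:
    """Pick H.245 multipoint-capability flags by conference role (sketch)."""
    caps = {
        "receiveMultipointCapability": False,
        "transmitMultipointCapability": False,
        "receiveAndTransmitMultipointCapability": False,
    }
    if role == "broadcast_source":
        # the broadcast source's video is distributed toward the multicast group
        caps["transmitMultipointCapability"] = True
    elif role in ("viewed_by_source", "ordinary"):
        # these terminals only receive the conference multicast video
        caps["receiveMultipointCapability"] = True
    return caps

print(multipoint_capability("broadcast_source"))
```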
  • the audio data of all terminals in the conference is sent to the MCU, which is processed by the MCU as needed.
  • the audio received by each terminal is the audio data synthesized by the MCU for each terminal.
  • At this point, the low-bandwidth video conference is established. It can be seen from the above process that no matter how many terminals participate in the conference, only two channels of video data are uploaded to the MCU, together with the audio data of all terminals; likewise, the MCU sends only two channels of video data, together with the audio data for all terminals. A large-scale video conference is thus held while occupying extremely little MCU bandwidth.
  • The video data of the broadcast source and of the terminal viewed by the broadcast source is sent to the MCU; the MCU sends the broadcast source terminal's video data to the multicast address and sends the video of the terminal viewed by the broadcast source to the broadcast source terminal.
  • the code stream received by other conference terminals is conference multicast data, and does not need to be sent by the MCU.
  • the audio data in the conference is still transmitted between the MCU and the video conferencing terminal in a conventional unicast manner.
  • FIG. 9 is a flowchart of a multicast media channel in which a low bandwidth video conference is opened in a sending direction according to the embodiment.
  • The Multipoint Controller (MC) layer (or protocol stack) of the MCU initiates the process of opening a logical channel.
  • The protocol stack determines, based on the result of the master-slave determination, which party assigns the multicast address. If it is in the master status, then when the upper layer requests the RTP/RTCP addresses, the upper layer is required to allocate a multicast address (RTP: Real-time Transport Protocol; RTCP: Real-time Transport Control Protocol).
  • the MC layer forwards the request message.
  • The application layer checks the channel parameters. If multicast is supported and the real-time transport protocol address (RTPAddress), the real-time transport control protocol port (RTCPPort), and the real-time transport protocol port (RTPPort) are all 0, a multicast address is requested, and the RTP and RTCP addresses are returned to the MC layer, placed in the local address information field. Otherwise, the other party has already assigned the RTP and RTCP multicast addresses, given in the RTPAddress, RTCPPort, and RTPPort fields; these multicast addresses are returned in the response to the MC layer.
  • The protocol stack sends an OpenLogicalChannel request; the RTP and RTCP multicast addresses are given in the forwardLogicalChannelParameters field.
  • The protocol adaptation layer of the terminal requests the RTP and RTCP addresses from the MC. Since the MCU has allocated the multicast address, the multicast address is reported to the MC layer in RTPAddress, RTCPPort, and RTPPort.
  • The application layer of the terminal checks the parameters. If multicast is supported and RTPAddress, RTCPPort, and RTPPort are all 0, a multicast address is being requested, so the application layer responds to the MC layer with RTP and RTCP addresses, which are placed in the local address information field. Otherwise, the other party has already allocated RTP and RTCP multicast addresses, given in the RTPAddress, RTCPPort, and RTPPort fields; these multicast addresses are returned in the response to the MC layer.
  • The protocol stack of the terminal sends an OpenLogicalChannelAck message to the MCU, whose RTP and RTCP addresses are the multicast addresses allocated by the MCU.
  • the protocol stack of the MCU reports to the application layer that the channel is successfully opened.
  • the MC layer of the MCU indicates to the application layer the RTP and RTCP multicast addresses of the other party.
  • the MC layer of the MCU reports to the application layer that the application layer channel has been opened.
  • FIG. 10 is a flowchart of changing the broadcast source in a video conference according to this embodiment. The speaking terminal is switched as needed while the conference is in progress. When the speaking terminal changes: the MCU sets the new speaking terminal as the broadcast source; the video conference low-bandwidth control module restores the uplink bandwidth of the new broadcast source terminal, which starts sending video to the MCU; the video conference low-bandwidth control module controls the uplink bandwidth of the old broadcast source to 0; the MCU joins the old broadcast source terminal to the conference multicast group, so that it receives video from the multicast address like the other terminals; and the MCU sends the video data from the new broadcast source to the multicast address. What the other terminals see will be the new broadcast source's video; the switch of the broadcast source is complete.
  • FIG. 11 is a flowchart of changing the terminal viewed by the broadcast source in a video conference according to this embodiment. The terminal viewed by the broadcast source is also switched as needed while the conference is in progress, for example to interact with a particular terminal. When the terminal viewed by the broadcast source changes:
  • the MCU sets the new terminal as the terminal now viewed by the broadcast source;
  • the video conference low-bandwidth control module restores the uplink bandwidth of the newly viewed terminal, which starts sending its video to the MCU;
  • the video conference low-bandwidth control module controls the uplink bandwidth of the terminal previously viewed by the broadcast source to 0;
  • the MCU sends the video data from this terminal to the broadcast source, completing the switch of the terminal viewed by the broadcast source.
  • A multi-screen conference means that the MCU composes the videos of multiple conference terminals into one picture and then sends it to the other terminals for viewing.
  • A multi-screen conference only requires the video conference low-bandwidth control module to restore the uplink bandwidth of the terminals to be composed.
  • The streams of these video conference terminals are uploaded to the MCU for composition, and the composed picture is sent to the multicast address.
  • The increase in network uplink bandwidth equals the sum of the bandwidths of the composed terminals, and the downlink bandwidth is unchanged.
  • The physical locations of video conference terminals are generally dispersed overall but locally concentrated, for example when provincial and municipal two-level conferences are held.
  • The conference terminals of such conferences are distributed across the province and multiple cities. If the conference is held in the usual way, the conference terminals in each city occupy the network between the province and the cities, consuming a great deal of inter-city line bandwidth and causing network congestion.
  • Multiple multicast sources are added to the conference multicast, and the MCU sends one multicast data stream down to each region.
  • The terminals in a region obtain the conference video stream from their local multicast source.
  • The downlink bandwidth between the province and the cities is one video stream's bandwidth plus the bandwidth occupied by all the terminals' audio.
  • the video conference low-bandwidth control module controls the bandwidth of all the terminals in the conference.
  • The MCU's downlink bandwidth for video is the sum of the bandwidth of the one stream sent to the broadcast source and the multicast bandwidth sent to each region.
  • The uplink bandwidth is still two streams: the broadcast source and the terminal viewed by the broadcast source.
  • The cascade lines between MCUs occupy one stream's bandwidth in each of the uplink and downlink directions.
  • The conference on each MCU is organized using the same technique as in implementation example 1.
  • A multi-capability conference is required, that is, conference terminals join the conference with different capabilities.
  • The MCU needs to send video of matching capability to conference terminals of different capabilities, to ensure that each conference terminal can view the conference normally.
  • The MCU first needs to group the terminals by capability; for example, terminals with 1080P resolution form one group, terminals with 720P resolution another, and terminals with Common Intermediate Format (CIF) resolution another.
  • The MCU encodes the broadcast source video into each of these capability formats and sends them to the corresponding multicast sources.
  • Each video conference terminal likewise joins the multicast source corresponding to its own capability and obtains a matching video for viewing.
  • the video conference low-bandwidth control module controls the bandwidth of all the terminals in the conference.
  • The MCU's downlink bandwidth for video is the sum of the bandwidth of the one stream sent to the broadcast source and the bandwidth sent to each multicast source.
  • The uplink bandwidth is still two streams: the broadcast source and the terminal viewed by the broadcast source.
  • The method and apparatus of this embodiment can effectively reduce network bandwidth occupation and reduce network congestion.
  • A large-scale increase in the number of terminals participating in the conference does not significantly increase the network overhead.
  • Embodiments herein also provide a storage medium comprising a stored program, where the program, when run, executes any one of the methods above.
  • The storage medium may be arranged to store program code for performing step S1 and step S2.
  • In step S1, video data of the first terminal that joins the video conference is sent to the multicast address of the video conference, and audio data of the first terminal is sent to the multipoint control unit (MCU).
  • In step S2, the conference terminals are instructed to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
  • The foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
  • Embodiments herein also provide an electronic device comprising a memory and a processor, the memory storing a computer program and the processor being arranged to run the computer program to perform the steps of any one of the method embodiments above.
  • the electronic device may further include a transmission device and an input and output device, wherein the transmission device is connected to the processor, and the input and output device is connected to the processor.
  • the processor may be arranged to perform steps S1 and S2 by a computer program.
  • In step S1, video data of the first terminal that joins the video conference is sent to the multicast address of the video conference, and audio data of the first terminal is sent to the multipoint control unit (MCU).
  • In step S2, the conference terminals are instructed to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
  • the examples in this embodiment may refer to the examples described in the foregoing embodiments and the optional embodiments, and details are not described herein again.
  • Each module or step above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. In one embodiment, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device. In some cases, the steps shown or described may be performed in an order different from that described herein; alternatively, they may be fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. As such, this document is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Provided herein are a video conference transmission method and apparatus, and an MCU. The method includes: instructing video data of a first terminal that joins a video conference to be sent to a multicast address of the video conference, and audio data of the first terminal to be sent to a multipoint control unit (MCU); and instructing conference terminals to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.

Description

Video conference transmission method and apparatus, and MCU
This application claims priority to Chinese patent application No. 201711458953.2, filed with the Chinese Patent Office on December 28, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
This document relates to the field of communications, and for example to a video conference transmission method and apparatus, and an MCU.
Background
A video conference system in the related art is a remote communication system supporting bidirectional transmission of audio and video. Through such a system, users in different places can carry out real-time audio and video communication with a near face-to-face effect.
At the device level, a video conference system must include video conference terminals and a multipoint control unit (MCU). A terminal is the device used by a user: it captures the user's audio and video data and sends them over the network to the remote end, while receiving the remote end's audio and video data from the network and playing them to the user. The MCU is responsible for multiparty conference management and for exchanging and mixing the audio and video data of the conference terminals.
Video conferences in the related art mostly transmit their data over IP networks. For video conferences, which have high real-time requirements, higher bandwidth allows more data to be transmitted and thus better quality of service. On a local area network, bandwidth can generally be provisioned on demand; on the Internet or on leased dedicated lines, however, bandwidth resources are very limited, and the higher the bandwidth, the higher the cost to the user. In the usual video conference networking mode, every video terminal establishes a connection with the MCU: each terminal uploads its own audio and video data to the MCU, and the MCU sends the conference audio and video data down to each terminal, so every terminal occupies symmetric uplink and downlink bandwidth. For example, with 100 conference terminals in a conference and a conference bandwidth of 2M, the MCU's uplink and downlink bandwidths are each 200M. With advancing informatization, there is growing demand to hold distance education, enterprise assemblies, and government work meetings by video conference. The high bandwidth requirement comes at a steep price for users outside local area networks, and users who operate outdoors year-round, such as at sea, in exploration, or in the military, must use satellite or wireless networks, where such high bandwidth is essentially unattainable.
In view of the above phenomena in the related art, no solution that avoids them has yet been found.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the protection scope of the claims.
Embodiments herein provide a video conference transmission method and apparatus, and an MCU.
According to an embodiment herein, a video conference transmission method is provided, including: instructing video data of a first terminal that joins a video conference to be sent to a multicast address of the video conference, and audio data of the first terminal to be sent to a multipoint control unit (MCU); and instructing conference terminals to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
According to an embodiment herein, another video conference transmission method is provided, including: receiving audio data of all participating terminals in a video conference and video data of a first terminal among all the participating terminals; and sending the audio data to all the participating terminals, and sending the video data to the terminals in the video conference other than the first terminal.
According to another embodiment herein, a multipoint control unit (MCU) is provided, including: a first instruction module, configured to instruct video data of a first terminal that joins a video conference to be sent to a multicast address of the video conference, and audio data of the first terminal to be sent to the multipoint control unit (MCU); and a second instruction module, configured to instruct conference terminals to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
According to another embodiment herein, another video conference transmission apparatus is provided, including: a receiving module, configured to receive audio data of all participating terminals in a video conference and video data of a first terminal among all the participating terminals; and a sending module, configured to send the audio data to all the participating terminals, and to send the video data to the terminals in the video conference other than the first terminal.
According to yet another embodiment herein, a storage medium is further provided, the storage medium storing a computer program, where the computer program is arranged to execute, when run, the steps in any one of the method embodiments above.
According to yet another embodiment herein, an electronic device is further provided, including a memory and a processor, the memory storing a computer program and the processor being arranged to run the computer program to execute the steps in any one of the method embodiments above.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of this document. In the drawings:
FIG. 1 is a network architecture diagram of an embodiment herein;
FIG. 2 is a flowchart of a video conference transmission method according to an embodiment herein;
FIG. 3 is a flowchart of a video conference transmission method according to another embodiment herein;
FIG. 4 is a structural block diagram of a multipoint control unit (MCU) according to an embodiment herein;
FIG. 5 is a structural block diagram of a video conference transmission apparatus according to an embodiment herein;
FIG. 6 is a schematic structural diagram of a video conference system according to an embodiment herein;
FIG. 7 is a flowchart of holding a video conference under low-bandwidth conditions according to an embodiment herein;
FIG. 8 is a schematic diagram of the data flow of a low-bandwidth video conference according to an embodiment herein;
FIG. 9 is a flowchart of opening a multicast media channel in one sending direction in a low-bandwidth video conference according to an embodiment herein;
FIG. 10 is a flowchart of changing the broadcast source in a video conference according to an embodiment herein;
FIG. 11 is a flowchart of changing the terminal viewed by the broadcast source in a video conference according to an embodiment herein.
Detailed Description
This document is described in detail below with reference to the drawings and in conjunction with embodiments.
It should be noted that the terms "first", "second", and the like in the description and claims herein and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
Embodiment 1
The embodiments of the present application can run on the network architecture shown in FIG. 1. As shown in FIG. 1, which is a network architecture diagram of an embodiment herein, the architecture includes an MCU and at least one terminal. When a video conference is held, the terminals exchange audio and video in real time through the MCU.
This embodiment provides a video conference transmission method running on the above network architecture. FIG. 2 is a flowchart of a video conference transmission method according to an embodiment herein. As shown in FIG. 2, the flow includes step S202 and step S204.
In step S202, video data of a first terminal that joins a video conference is instructed to be sent to a multicast address of the video conference, and audio data of the first terminal is sent to a multipoint control unit (MCU).
In step S204, conference terminals are instructed to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
Through the above steps, by sending only the video data of the first terminal of the video conference to the multicast address, the network resources used by the other terminals in the video conference to send video data can be saved, avoiding the excessive occupation of network resources by video conferences in the related art, lowering network bandwidth occupation, reducing network congestion, and improving the utilization of network resources.
In an embodiment, the above steps may be executed by a management device of the video conference, a control device, an MCU, a server, or the like, but are not limited thereto.
In an embodiment, the execution order of step S202 and step S204 is interchangeable, that is, step S204 may be executed first and then step S202.
In an embodiment, the method further includes: controlling the conference terminals to refrain from sending uplink video data. This can be achieved by controlling the uplink bandwidth to 0, by not allocating uplink bandwidth, or by controlling the conference terminals not to send uplink video data.
In an embodiment, the method further includes: sending video data of a second terminal that joins the video conference to the first terminal. The second terminal may be any terminal in the video conference other than the first terminal, and may be designated by a policy or assigned at random. For example, sending the video data of the second terminal that joins the video conference to the first terminal includes: sending the video data of the second terminal to the MCU, which forwards it to the first terminal. Of course, the second terminal may also send its video data directly to the first terminal.
In an embodiment, the video data of the first terminal includes: video data captured by the first terminal, and video data of the second terminal received by the first terminal. The captured video data may be the first terminal's local video data captured by a camera, corresponding to the user of the first terminal; the received video data of the second terminal is external video data, which is broadcast to the other terminals uniformly through the first terminal, reducing transmission bandwidth.
In an embodiment, after instructing the conference terminals to send their audio data to the MCU, the method further includes: sending the audio data received by the MCU to the first terminal and the conference terminals, and sending the video data received at the multicast address to the conference terminals.
In an embodiment, sending the audio data received by the MCU to the first terminal and the conference terminals includes: mixing all the audio data received by the MCU, and sending the mixed audio data to the first terminal and the conference terminals.
In an embodiment, after instructing the video data of the first terminal that joins the video conference to be sent to the multicast address of the video conference, the method further includes: determining a designated terminal in the video conference, and sending the video data of the designated terminal to the multicast address, where the designated terminal is the terminal currently speaking in the video conference. At the start of the video conference, the designated terminal defaults to the first terminal; after the conference starts, it is switched, according to the conference situation, to the terminal corresponding to the user whose picture needs to be displayed, such as the terminal currently speaking or the terminal corresponding to the rostrum.
In an embodiment, after determining the designated terminal in the video conference, the method further includes: controlling the first terminal to refrain from sending uplink video data.
In an embodiment, before instructing the video data of the first terminal that joins the video conference to be sent to the multicast address of the video conference and the audio data of the first terminal to be sent to the multipoint control unit (MCU), the method further includes: creating the video conference and configuring the multicast address of the video conference.
In an embodiment, before instructing the video data of the first terminal that joins the video conference to be sent to the multicast address of the video conference, the method further includes: setting at least one first terminal, where the at least one first terminal corresponds to different location regions; this can be applied in distributed networks.
In an embodiment, sending the video data received by the MCU to the conference terminals through the multicast address includes: encoding the video data received by the MCU to obtain video data in at least one format, where video data in different formats corresponds to different transmission bandwidths; and sending the encoded video data to the conference terminals through at least one multicast address. This method can be applied in multi-terminal scenarios: each conference terminal has different receiving bandwidth and viewing requirements, which can be met by sending video data at different resolutions.
This embodiment also provides a video conference transmission method running on the above network architecture. FIG. 3 is a flowchart of another video conference transmission method according to an embodiment herein. As shown in FIG. 3, the flow includes step S302 and step S304.
In step S302, audio data of all participating terminals in a video conference and video data of a first terminal among all the participating terminals are received.
In step S304, the audio data is sent to all the participating terminals, and the video data is sent to the terminals in the video conference other than the first terminal.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform. Based on this understanding, the technical solution herein, in essence or in the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a read-only memory/random access memory (ROM/RAM), a magnetic disk, or an optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the method described in each embodiment herein.
Embodiment 2
This embodiment further provides a video conference transmission apparatus configured to implement the above method embodiments; what has already been described is not repeated. FIG. 4 is a structural block diagram of a multipoint control unit (MCU) according to an embodiment herein. As shown in FIG. 4, the apparatus includes a first instruction module 40 and a second instruction module 42.
The first instruction module 40 is configured to instruct video data of a first terminal that joins a video conference to be sent to a multicast address of the video conference, and audio data of the first terminal to be sent to the multipoint control unit (MCU).
The second instruction module 42 is configured to instruct conference terminals to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
FIG. 5 is a structural block diagram of a video conference transmission apparatus according to an embodiment herein. As shown in FIG. 5, the apparatus further includes a receiving module 50 and a sending module 52.
The receiving module 50 is configured to receive audio data of all participating terminals in a video conference and video data of a first terminal among all the participating terminals.
The sending module 52 is configured to send the audio data to all the participating terminals, and to send the video data to the terminals in the video conference other than the first terminal.
The method steps in the above embodiments can be implemented in the apparatus of this embodiment through corresponding functional modules, and are not repeated here.
It should be noted that each of the above modules can be implemented by software or hardware. For the latter, this can be achieved in, but is not limited to, the following ways: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
Embodiment 3
An embodiment herein further elaborates the solution of the present application in combination with scenarios.
An embodiment herein provides a method and apparatus for large-scale networking of a multiparty video conference under low-bandwidth conditions, so as to realize large-scale multiparty video conference networking under low-bandwidth conditions and avoid the situation in related video conference networking where a large-scale multiparty video conference cannot be held under extremely limited bandwidth resources.
The video conference apparatus described in an embodiment herein includes: a video conference MCU, a video conference service management system, conference terminals (which may include conference-room hardware terminals, personal computer (PC) soft terminals, Web Real-Time Communication (WebRTC) terminals, mobile-device soft terminals, and the like), and a video conference low-bandwidth control module.
An embodiment herein employs three techniques, namely network multicast, network unicast, and flow control, and exercises control over conference creation, conference holding, conference terminal access, video source control, and bandwidth control to achieve large-scale low-bandwidth networking.
A method for large-scale multiparty video conference networking under low-bandwidth conditions provided by an embodiment herein includes the following eight steps.
In the first step, a user creates a conference in the video conference service management system, selects the conference terminals, and designates the conference as a low-bandwidth conference mode. Here, "low-bandwidth conference" may be described in different ways; the core idea is to save bandwidth resources, in contrast to a conventional conference.
In the second step, the low-bandwidth conference parameters are configured, including the multicast address, the main video multicast port, and the auxiliary video multicast port.
In the third step, the video conference service management system sends the conference information to the MCU, and the MCU calls each participating terminal to join the conference, instructing the terminals via signaling during the call to flow-control their uplink video.
In the fourth step, the video conference low-bandwidth control module sets the first terminal to join the conference (corresponding to the first terminal in the above embodiments) as the broadcast source, instructs the broadcast source terminal to send its video media data to the configured multicast address and its audio data to the MCU, and sets the MCU as this terminal's audio and video media receiving source.
In the fifth step, the video conference low-bandwidth control module sets the second terminal to join the conference as the terminal viewed by the broadcast source (corresponding to the second terminal in the above embodiments). Both the audio and video data of this viewed terminal are sent to the MCU; its video data is forwarded by the MCU to the broadcast source terminal, and its video media receiving address is the multicast address.
In the sixth step, the video conference low-bandwidth control module sets the video media receiving address of the terminals other than the broadcast source terminal and the terminal viewed by the broadcast source to the configured multicast address; their audio data is sent to the MCU, and their uplink video data is controlled to 0, that is, these terminals do not send their local video data to the MCU. The audio data of all terminals is mixed at the MCU and then sent to every terminal.
In the seventh step, when the speaking terminal in the conference changes, the video conference low-bandwidth control module restores the video uplink bandwidth of the current speaking terminal to its original bandwidth and sets its video receiving address to the MCU, while controlling the uplink video bandwidth of the former broadcast source terminal to 0 and setting its video receiving address to the multicast address.
In the eighth step, when the terminal viewed by the broadcast source changes, the video conference low-bandwidth control module restores the video uplink bandwidth of the terminal now viewed by the broadcast source to its original bandwidth, while controlling the uplink video bandwidth of the formerly viewed terminal to 0.
The order of the above conference steps may be adjusted appropriately; for example, the initial selection of the broadcast source and of the terminal it views may follow different rules, but these two video sources of the conference cannot be omitted.
With the method and apparatus of this embodiment, the uplink bandwidth needed to hold a conference is: two video streams' bandwidth (the broadcast source and the terminal it views) plus the sum of all terminals' audio bandwidth; the downlink bandwidth is: one video stream's bandwidth (the broadcast source) plus the sum of all terminals' audio bandwidth.
If a conference with 1000 participating terminals is held with a conference bandwidth of 2M, then with the method and apparatus of this embodiment the MCU's uplink video bandwidth is about 4M and its downlink video bandwidth is about 4M, whereas in the related art the MCU's uplink and downlink bandwidths are each about 2000M.
Compared with the related art, the present application can greatly reduce the bandwidth resources needed to hold a video conference, and as the number of participating terminals increases, the required bandwidth grows only by the corresponding amount of audio bandwidth, which can well relieve the demand for large-scale video conference networking when bandwidth resources are scarce.
Compared with related multicast-based conferences, this embodiment can flexibly realize interaction between terminals: each conference terminal can be viewed by the main venue or by everyone as needed, and voice communication is possible at any time. Meanwhile, since the uplink bandwidth of all conference terminals except the broadcast source and the terminal it views is managed with flow control, the uplink bandwidth from the video conference terminals toward the MCU can be greatly reduced.
The conference audio data can also be controlled in the same way as the video data to further reduce bandwidth, but then each conference terminal cannot speak on its own at any time and the conference controller must exercise manual control. This embodiment does not describe the audio handling in detail.
Embodiment 3 further includes the following implementation examples.
Implementation example 1
FIG. 6 is a schematic structural diagram of the video conference system described in this embodiment. The whole system includes a video conference service management system, a video conference MCU, and a certain number of video conference terminals; the video conference low-bandwidth control module inside the MCU is the apparatus newly added by this embodiment.
The video conference service management system is the operation interface for holding video conferences, through which the conference convener creates and manages conferences.
The video conference multipoint control unit (MCU) is the core device in a video conference and is mainly responsible for the signaling and stream processing between it and each video conference terminal or other MCUs.
After the conference television terminal captures sound and video images, it compresses them with a video conference encoding algorithm and sends them over the IP network to the remote MCU or video conference terminal. After receiving the stream, the remote video conference terminal decodes it and plays it to the user.
The video conference low-bandwidth control module controls the video data bandwidth and transmission direction of the terminals in the conference through standard video conference protocols, so as to reduce the MCU's uplink and downlink bandwidth and thereby realize a large-scale video conference under low-bandwidth conditions.
FIG. 7 is a flowchart of holding a video conference under low-bandwidth conditions as described in this embodiment.
The conference convener sets the basic conference information, such as the conference name, conference bandwidth, and audio/video formats, and the list of terminals that need to participate.
Information related to video conference low-bandwidth control is set, including the multicast address, the main video multicast port, and the auxiliary video multicast port.
The conference service management system sends the conference information to the MCU, which holds the conference; the MCU allocates MCU media and network processing resources according to the conference information.
The MCU creates the multicast group and calls the participating terminals into the conference one by one. According to the video conference rules, there are two main video sources in the conference: a broadcast source terminal, and the terminal viewed by the broadcast source terminal. The video of the broadcast source terminal is viewed by the other terminals in the conference, and the broadcast source terminal views another terminal, which is defined as the terminal viewed by the broadcast source. While the conference is in progress, these two video sources can be dynamically switched to other terminals. When the MCU calls the terminals preset in the conference to join, the first terminal to access successfully defaults to the broadcast source, the second terminal defaults to the terminal viewed by the broadcast source, and the other terminals are ordinary conference terminals.
When the first terminal accesses the conference, the MCU sets it as the conference broadcast source; the video conference low-bandwidth control module notifies this terminal to send its video data to the MCU, and after receiving the broadcast source terminal's video, the MCU forwards it to the multicast address for the terminals in the multicast group to view. In step 6, when the second terminal accesses the conference, the MCU sets it as the terminal viewed by the conference broadcast source; the video conference low-bandwidth control module notifies this terminal to send its video data to the MCU, and the MCU sends the second terminal's video to the broadcast source terminal for viewing. Meanwhile, the video conference low-bandwidth control module joins this terminal to the multicast group, and the terminal receives and plays the video forwarded by the multicast address.
When the other terminals access the conference, the MCU determines that the conference already has a broadcast source and a terminal viewed by the broadcast source, so the video conference low-bandwidth control module directly joins them to the multicast address and controls their uplink video bandwidth to 0, that is, these terminals need not send video data to the MCU; they receive and play the conference multicast video.
During the MCU's call to a terminal, the multicast attributes need to be indicated. Taking the H.323 protocol as an example (other communication protocols are also applicable), the key parameters of the multicast capability set are the receive multipoint capability (receiveMultipointCapability), the transmit multipoint capability (transmitMultipointCapability), and the receive-and-transmit multipoint capability (receiveAndTransmitMultipointCapability), which are set according to whether the terminal is the broadcast source, the terminal viewed by the broadcast source, or an ordinary terminal. This can be implemented by the following code:
(The code listing is provided in the published application only as images: Figure PCTCN2018101956-appb-000001 and Figure PCTCN2018101956-appb-000002.)
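Since the capability-set listing survives only as images, the following is a purely illustrative sketch (the role names and the Python representation are assumptions, not from the source) of how the three capability flags named above might be derived from a terminal's role:

```python
def multipoint_capabilities(role):
    """Return H.245-style multipoint capability flags for a terminal role.

    Role names are illustrative, not from the source:
      'broadcast_source' - sends its video to the MCU and watches one terminal
      'watched'          - sends its video to the MCU (viewed by the source)
      'ordinary'         - only receives from the multicast address (uplink video 0)
    """
    receives = True  # every terminal receives conference media
    transmits = role in ("broadcast_source", "watched")
    return {
        "receiveMultipointCapability": receives,
        "transmitMultipointCapability": transmits,
        "receiveAndTransmitMultipointCapability": receives and transmits,
    }
```

For example, an ordinary terminal would advertise only the receive capability, matching the flow-controlled (zero uplink video) behavior described in the text.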
The audio data of all terminals in the conference is sent to the MCU, which processes it as needed. The audio each terminal receives is the audio data the MCU synthesizes for that terminal.
At this point, the holding of the video conference under low bandwidth is complete. As can be seen from the above flow, no matter how many terminals participate in the conference, only two video streams are uploaded to the MCU, while the uploaded audio is the audio of all terminals. The MCU likewise sends down only two video streams, plus the audio for all terminals. A large-scale video conference is thus held while occupying extremely low MCU bandwidth.
FIG. 8 is a schematic diagram of the data flow of the low-bandwidth video conference described in this embodiment. Under the control of the video conference low-bandwidth control module, the video data of the broadcast source and of the terminal viewed by the broadcast source are sent to the MCU; the MCU sends the broadcast source terminal's video data to the multicast address and sends the video of the terminal viewed by the broadcast source to the broadcast source terminal. The streams received by the other conference terminals are the conference multicast data and do not need to be sent by the MCU. The audio data in the conference is still transmitted between the MCU and the video conference terminals in the conventional unicast manner.
FIG. 9 is a flowchart of opening a multicast media channel in one sending direction in the low-bandwidth video conference described in this embodiment.
The Multipoint Controller (MC) layer (or protocol stack) of the MCU initiates the process of opening a logical channel.
Based on the result of the master-slave determination, the protocol stack determines whether the local side allocates the multicast address. If the local side is the master, when requesting the RTCP address from the upper layer, it requires the upper layer to allocate multicast addresses (Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP) addresses).
The MC layer forwards the request message.
The application layer checks the channel parameters. If multicast is supported and the Real-time Transport Protocol address (RTPAddress), the Real-time Transport Control Protocol port (RTCPPort), and the Real-time Transport Protocol port (RTPPort) are all 0, a multicast address is being requested, so the application layer responds to the MC layer with RTP and RTCP addresses, which are placed in the local address information field. Otherwise, the other party has already allocated RTP and RTCP multicast addresses, given in the RTPAddress, RTCPPort, and RTPPort fields; these multicast addresses are returned in the response to the MC layer.
The protocol stack sends an OpenLogicalChannel request; the RTP and RTCP multicast addresses are given in the forwardLogicalChannelParameters field.
The terminal's protocol adaptation layer requests the RTP and RTCP addresses from the MC. Since the MCU has allocated the multicast address, the multicast address is reported to the MC layer in RTPAddress, RTCPPort, and RTPPort.
The terminal's application layer checks the parameters. If multicast is supported and RTPAddress, RTCPPort, and RTPPort are all 0, a multicast address is being requested, so the application layer responds to the MC layer with RTP and RTCP addresses, which are placed in the local address information field. Otherwise, the other party has already allocated RTP and RTCP multicast addresses, given in the RTPAddress, RTCPPort, and RTPPort fields; these multicast addresses are returned in the response to the MC layer.
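The all-zero check performed by the application layer on both the MCU and terminal sides can be sketched as follows (a simplification; the field names follow the text, everything else is illustrative):

```python
def handle_address_request(rtp_address, rtcp_port, rtp_port, allocate_multicast):
    """Decide whether the peer is asking the local side to allocate multicast addresses.

    If RTPAddress, RTCPPort and RTPPort are all 0, the peer is requesting a
    multicast address, so the local side allocates RTP/RTCP addresses and
    responds with them; otherwise the peer has already allocated multicast
    addresses, which are echoed back in the response to the MC layer.
    """
    if rtp_address == 0 and rtcp_port == 0 and rtp_port == 0:
        rtp, rtcp = allocate_multicast()  # local side allocates
        return {"rtp": rtp, "rtcp": rtcp, "allocated_locally": True}
    # peer already allocated: return the addresses it supplied
    return {
        "rtp": (rtp_address, rtp_port),
        "rtcp": (rtp_address, rtcp_port),
        "allocated_locally": False,
    }

# usage: the master side (here, the MCU) allocates; the terminal side echoes
alloc = lambda: (("239.1.1.1", 5000), ("239.1.1.1", 5001))
print(handle_address_request(0, 0, 0, alloc)["allocated_locally"])                  # True
print(handle_address_request("239.1.1.1", 5001, 5000, alloc)["allocated_locally"])  # False
```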
The terminal's protocol stack sends an OpenLogicalChannelAck message to the MCU, whose RTP and RTCP addresses are the multicast addresses allocated by the MCU.
The MCU's protocol stack reports to the application layer that the channel has been opened successfully.
The MCU's MC layer indicates to the application layer the other party's RTP and RTCP multicast addresses.
The MCU's MC layer reports to the application layer that the application-layer channel has been opened.
FIG. 10 is a flowchart of changing the broadcast source in the video conference described in this embodiment. While the conference is in progress, the speaking terminal is switched as needed. When the speaking terminal changes: the MCU sets the new speaking terminal as the broadcast source; the video conference low-bandwidth control module restores the uplink bandwidth of the new broadcast source terminal, which starts sending video to the MCU; the video conference low-bandwidth control module controls the uplink bandwidth of the old broadcast source to 0; the MCU joins the old broadcast source terminal to the conference multicast group, so that it receives video from the multicast address like the other terminals; and the MCU sends the video data from the new broadcast source to the multicast address. What the other terminals see will be the new broadcast source's video; the switch of the broadcast source is complete.
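The switching sequence can be sketched as a small state update (a minimal sketch; the class and its bookkeeping are illustrative, not the patent's implementation):

```python
class Conference:
    def __init__(self, broadcast_source, original_bandwidth):
        self.broadcast_source = broadcast_source
        self.original_bandwidth = original_bandwidth
        # uplink video bandwidth per terminal; only the broadcast source
        # (and the terminal it views, omitted here) send video upstream
        self.uplink = {broadcast_source: original_bandwidth}
        self.multicast_group = set()  # terminals receiving from the multicast address

    def switch_broadcast_source(self, new_source):
        old = self.broadcast_source
        self.broadcast_source = new_source
        self.uplink[new_source] = self.original_bandwidth  # restore new source's uplink
        self.uplink[old] = 0                               # throttle the old source
        self.multicast_group.add(old)  # old source now watches the multicast stream
        return old

conf = Conference("terminal_A", 2000)
conf.switch_broadcast_source("terminal_B")
print(conf.uplink)  # {'terminal_A': 0, 'terminal_B': 2000}
```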
FIG. 11 is a flowchart of changing the terminal viewed by the broadcast source in the video conference described in this embodiment. While the conference is in progress, the terminal viewed by the broadcast source is also switched as needed, for example to hold an interactive dialogue with a particular terminal. When the terminal viewed by the broadcast source changes: the MCU sets the new terminal as the terminal now viewed by the broadcast source; the video conference low-bandwidth control module restores the uplink bandwidth of the newly viewed terminal, which starts sending its video to the MCU; the video conference low-bandwidth control module controls the uplink bandwidth of the terminal previously viewed by the broadcast source to 0; and the MCU sends the video data from this terminal to the broadcast source, completing the switch of the terminal viewed by the broadcast source.
Implementation example 2: low-bandwidth multi-screen video conference
A multi-screen conference means that the MCU composes the videos of multiple conference terminals into one picture and then sends it to the other terminals for viewing. On the basis of this embodiment, realizing a multi-screen conference only requires the video conference low-bandwidth control module to restore the uplink bandwidth of the terminals to be composed; the streams of these video conference terminals are uploaded to the MCU for composition, and the composed picture is sent to the multicast address. In multi-screen conference mode, the increase in network uplink bandwidth equals the sum of the bandwidths of the composed terminals, and the downlink bandwidth is unchanged.
Implementation example 3: low-bandwidth distributed video conference networking
For large conferences held across regions, the physical locations of the video conference terminals are generally dispersed overall but locally concentrated, for example when provincial and municipal two-level conferences are held, whose conference terminals are distributed across the province and multiple cities. If such a conference is held in the usual way, the conference terminals in each city occupy the network between the province and the cities, consuming a great deal of inter-city line bandwidth and causing network congestion. When holding such a video conference, this embodiment adds multiple multicast sources to the conference multicast: the MCU sends one multicast data stream down to each region, and the terminals in a region obtain the conference video stream from their local multicast source, so the downlink bandwidth between the province and the cities is one video stream's bandwidth plus the bandwidth occupied by all the terminals' audio. The video conference low-bandwidth control module controls the bandwidth of all the terminals in the conference; the MCU's downlink bandwidth for video is the sum of the one stream sent to the broadcast source and the multicast streams sent to each region, while the uplink bandwidth is still two streams: the broadcast source and the terminal it views.
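Under these assumptions, the MCU's video bandwidth budget in the distributed case can be tallied as follows (a sketch with illustrative numbers; one multicast stream per region, audio ignored):

```python
def distributed_mcu_video_bandwidth(video_kbps, regions):
    """MCU video bandwidth with one multicast source per region.

    Downlink: one stream to the broadcast source plus one multicast
    stream per region. Uplink: still two streams (the broadcast source
    and the terminal it views).
    """
    return {"downlink": (1 + regions) * video_kbps, "uplink": 2 * video_kbps}

print(distributed_mcu_video_bandwidth(2000, 5))
# {'downlink': 12000, 'uplink': 4000}
```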
When the video conference is networked with multiple cascaded MCUs, the cascade lines between MCUs occupy one stream's bandwidth in each of the uplink and downlink directions. The conference on each MCU is networked with the same technique as in implementation example 1.
Implementation example 4: low-bandwidth multi-capability video conference
Since a video conference has many conference terminals, whose performance and bandwidth are not necessarily the same, a multi-capability conference is needed, that is, conference terminals join the conference with different capabilities. For a multi-capability conference, the MCU needs to send video of matching capability to conference terminals of different capabilities, to ensure that each conference terminal can view the conference normally. When holding a multi-capability low-bandwidth conference, the MCU first needs to group the terminals by capability, for example terminals with 1080P resolution in one group, terminals with 720P resolution in another, and terminals with Common Intermediate Format (CIF) resolution in another. When the conference is held, this embodiment adds multiple multicast sources to the conference multicast, producing one multicast source per capability; the MCU encodes the broadcast source video into each of these capability formats and sends them to the corresponding multicast sources. Each video conference terminal likewise joins the multicast source corresponding to its own capability and obtains a matching video for viewing. The video conference low-bandwidth control module controls the bandwidth of all the terminals in the conference; the MCU's downlink bandwidth for video is the sum of the one stream sent to the broadcast source and the streams sent to each multicast source, while the uplink bandwidth is still two streams: the broadcast source and the terminal it views.
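The capability grouping and per-group fan-out described here can be sketched as follows (the capability labels follow the example in the text; the plan structure is illustrative):

```python
from collections import defaultdict

def group_by_capability(terminals):
    # terminals: iterable of (name, capability) pairs, e.g. ("t1", "1080P")
    groups = defaultdict(list)
    for name, capability in terminals:
        groups[capability].append(name)
    return dict(groups)

def multicast_plan(terminals):
    # one multicast source per capability group; the MCU encodes the
    # broadcast source video once per group and sends it to that source
    return {cap: {"members": members, "encodings": 1}
            for cap, members in group_by_capability(terminals).items()}

plan = multicast_plan([("t1", "1080P"), ("t2", "720P"), ("t3", "1080P"), ("t4", "CIF")])
print(sorted(plan))  # ['1080P', '720P', 'CIF']
```

The downlink video cost then scales with the number of capability groups, not with the number of terminals, matching the bandwidth accounting in the text.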
As can be seen from the above examples, the method and apparatus of this embodiment can effectively reduce network bandwidth occupation and network congestion, and a large-scale increase in the number of terminals participating in the conference does not significantly increase the network overhead.
Embodiment 4
Embodiments herein further provide a storage medium that includes a stored program, where the program, when run, executes any one of the methods above.
In an embodiment, the storage medium may be arranged to store program code for performing step S1 and step S2.
In step S1, video data of a first terminal that joins a video conference is instructed to be sent to a multicast address of the video conference, and audio data of the first terminal is sent to a multipoint control unit (MCU).
In step S2, conference terminals are instructed to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
In an embodiment, the storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
Embodiments herein further provide an electronic device, including a memory and a processor, the memory storing a computer program and the processor being arranged to run the computer program to execute the steps in any one of the method embodiments above.
In an embodiment, the electronic device may further include a transmission device and an input/output device, where the transmission device and the input/output device are each connected to the processor.
In an embodiment, the processor may be arranged to perform step S1 and step S2 through the computer program.
In step S1, video data of a first terminal that joins a video conference is instructed to be sent to a multicast address of the video conference, and audio data of the first terminal is sent to a multipoint control unit (MCU).
In step S2, conference terminals are instructed to send their audio data to the MCU, where the conference terminals are the terminals in the video conference other than the first terminal.
In an embodiment, for examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
Each module or step herein may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. In an embodiment, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that here, or they may be fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. As such, this document is not limited to any specific combination of hardware and software.

Claims (17)

  1. A video conference transmission method, comprising:
    instructing video data of a first terminal that joins a video conference to be sent to a multicast address of the video conference, and audio data of the first terminal to be sent to a multipoint control unit (MCU); and
    instructing conference terminals to send their audio data to the MCU, wherein the conference terminals are the terminals in the video conference other than the first terminal.
  2. The method according to claim 1, further comprising: controlling the conference terminals to refrain from sending uplink video data.
  3. The method according to claim 1, further comprising:
    sending video data of a second terminal that joins the video conference to the first terminal.
  4. The method according to claim 3, wherein sending the video data of the second terminal that joins the video conference to the first terminal comprises:
    sending the video data of the second terminal that joins the video conference to the MCU, which sends it to the first terminal.
  5. The method according to claim 3 or 4, wherein the video data of the first terminal comprises: video data captured by the first terminal, and video data of the second terminal received by the first terminal.
  6. The method according to claim 1, wherein, after instructing the conference terminals to send their audio data to the MCU, the method further comprises:
    sending the audio data received by the MCU to the first terminal and the conference terminals, and sending the video data received at the multicast address to the conference terminals.
  7. The method according to claim 6, wherein sending the audio data received by the MCU to the first terminal and the conference terminals comprises:
    mixing all the audio data received by the MCU, and sending the mixed audio data to the first terminal and the conference terminals.
  8. The method according to claim 1, wherein, after instructing the video data of the first terminal that joins the video conference to be sent to the multicast address of the video conference, the method further comprises:
    determining a designated terminal in the video conference, and sending video data of the designated terminal to the multicast address, wherein the designated terminal is the terminal currently speaking in the video conference.
  9. The method according to claim 8, wherein, after determining the designated terminal in the video conference, the method further comprises:
    controlling the first terminal to refrain from sending uplink video data.
  10. The method according to claim 1, wherein, before instructing the video data of the first terminal that joins the video conference to be sent to the multicast address of the video conference and the audio data of the first terminal to be sent to the multipoint control unit (MCU), the method further comprises:
    creating the video conference, and configuring the multicast address of the video conference.
  11. The method according to claim 1, wherein, before instructing the video data of the first terminal that joins the video conference to be sent to the multicast address of the video conference, the method further comprises:
    setting at least one first terminal, wherein the at least one first terminal corresponds to different location regions.
  12. The method according to claim 6, wherein sending the video data received at the multicast address to the conference terminals comprises:
    encoding the video data received by the MCU to obtain video data in at least one format, wherein video data in different formats corresponds to different transmission bandwidths; and
    sending the encoded video data to the conference terminals through at least one said multicast address.
  13. A video conference transmission method, comprising:
    receiving audio data of all participating terminals in a video conference, and video data of a first terminal among all the participating terminals; and
    sending the audio data to all the participating terminals, and sending the video data to the terminals in the video conference other than the first terminal.
  14. A multipoint control unit (MCU), comprising:
    a first instruction module, configured to instruct video data of a first terminal that joins a video conference to be sent to a multicast address of the video conference, and audio data of the first terminal to be sent to the multipoint control unit (MCU); and
    a second instruction module, configured to instruct conference terminals to send their audio data to the MCU, wherein the conference terminals are the terminals in the video conference other than the first terminal.
  15. A video conference transmission apparatus, comprising:
    a receiving module, configured to receive audio data of all participating terminals in a video conference, and video data of a first terminal among all the participating terminals; and
    a sending module, configured to send the audio data to all the participating terminals, and to send the video data to the terminals in the video conference other than the first terminal.
  16. A storage medium, storing a computer program, wherein the computer program is arranged to execute, when run, the method of any one of claims 1 to 13.
  17. An electronic device, comprising a memory and a processor, the memory storing a computer program and the processor being arranged to run the computer program to execute the method of any one of claims 1 to 13.
PCT/CN2018/101956 2017-12-28 2018-08-23 Video conference transmission method and apparatus, and MCU WO2019128266A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18895218.8A EP3734967A4 (en) 2017-12-28 2018-08-23 VIDEOCONFERENCE TRANSMISSION PROCESS AND APPARATUS, AND MCU
US16/958,780 US20200329083A1 (en) 2017-12-28 2018-08-23 Video conference transmission method and apparatus, and mcu

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711458953.2 2017-12-28
CN201711458953.2A CN108156413B (zh) 2017-12-28 2017-12-28 视频会议的传输方法及装置、mcu

Publications (1)

Publication Number Publication Date
WO2019128266A1 true WO2019128266A1 (zh) 2019-07-04

Family

ID=62462637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/101956 WO2019128266A1 (zh) 2017-12-28 2018-08-23 视频会议的传输方法及装置、mcu

Country Status (4)

Country Link
US (1) US20200329083A1 (zh)
EP (1) EP3734967A4 (zh)
CN (1) CN108156413B (zh)
WO (1) WO2019128266A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710461A (zh) * 2022-03-31 2022-07-05 中煤科工集团重庆智慧城市科技研究院有限公司 多端音视频即时通讯方法及系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108156413B (zh) * 2017-12-28 2021-05-11 中兴通讯股份有限公司 视频会议的传输方法及装置、mcu
CN110719434A (zh) * 2019-09-29 2020-01-21 视联动力信息技术股份有限公司 一种视频会议的方法和装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963547A (en) * 1996-09-18 1999-10-05 Videoserver, Inc. Method and apparatus for centralized multipoint conferencing in a packet network
CN1543123A (zh) * 2003-04-28 2004-11-03 基于ip网络的分布式多媒体会议系统
CN1592212A (zh) * 2003-08-28 2005-03-09 北京鼎视通软件技术有限公司 一种基于多点控制单元的终端动态接入方法
CN1849824A (zh) * 2003-10-08 2006-10-18 思科技术公司 用于执行分布式视频会议的系统和方法
CN101404748A (zh) * 2008-10-31 2009-04-08 广东威创视讯科技股份有限公司 用于大规模高清网络视频会议的视频数据传输系统及方法
US8411129B2 (en) * 2009-12-14 2013-04-02 At&T Intellectual Property I, L.P. Video conference system and method using multicast and unicast transmissions
CN108156413A (zh) * 2017-12-28 2018-06-12 中兴通讯股份有限公司 视频会议的传输方法及装置、mcu

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6775247B1 (en) * 1999-03-22 2004-08-10 Siemens Information And Communication Networks, Inc. Reducing multipoint conferencing bandwidth
CN1964475A (zh) * 2006-12-06 2007-05-16 杭州华为三康技术有限公司 视频会议的实现方法、控制设备与用户终端
CN101710959A (zh) * 2009-12-10 2010-05-19 浙江大学 一种应用层组播视频会议系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963547A (en) * 1996-09-18 1999-10-05 Videoserver, Inc. Method and apparatus for centralized multipoint conferencing in a packet network
CN1543123A (zh) * 2003-04-28 2004-11-03 基于ip网络的分布式多媒体会议系统
CN1592212A (zh) * 2003-08-28 2005-03-09 北京鼎视通软件技术有限公司 一种基于多点控制单元的终端动态接入方法
CN1849824A (zh) * 2003-10-08 2006-10-18 思科技术公司 用于执行分布式视频会议的系统和方法
CN101404748A (zh) * 2008-10-31 2009-04-08 广东威创视讯科技股份有限公司 用于大规模高清网络视频会议的视频数据传输系统及方法
US8411129B2 (en) * 2009-12-14 2013-04-02 At&T Intellectual Property I, L.P. Video conference system and method using multicast and unicast transmissions
CN108156413A (zh) * 2017-12-28 2018-06-12 中兴通讯股份有限公司 视频会议的传输方法及装置、mcu

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3734967A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710461A (zh) * 2022-03-31 2022-07-05 中煤科工集团重庆智慧城市科技研究院有限公司 多端音视频即时通讯方法及系统
CN114710461B (zh) * 2022-03-31 2024-03-12 中煤科工集团重庆智慧城市科技研究院有限公司 多端音视频即时通讯方法及系统

Also Published As

Publication number Publication date
EP3734967A1 (en) 2020-11-04
EP3734967A4 (en) 2021-09-08
CN108156413B (zh) 2021-05-11
CN108156413A (zh) 2018-06-12
US20200329083A1 (en) 2020-10-15

Similar Documents

Publication Publication Date Title
US8830294B2 (en) Method and system for video conference control, videoconferencing network equipment, and videoconferencing site
EP1678951B1 (en) System and method for performing distributed video conferencing
US8659634B2 (en) Method and system for implementing three-party video call by mobile terminals
CN110971863B (zh) 一种多点控制单元跨区会议运行方法、装置、设备及系统
CN110475094B (zh) 视频会议处理方法、装置及可读存储介质
EP2936803B1 (en) Method and a device for optimizing large scaled video conferences
WO2019128266A1 (zh) 视频会议的传输方法及装置、mcu
US9743043B2 (en) Method and system for handling content in videoconferencing
WO2016082577A1 (zh) 视频会议的处理方法及装置
EP2704355B1 (en) Method, device and system for establishing multi-cascade channel
WO2011150868A1 (zh) 会议级联方法及系统
WO2015003532A1 (zh) 多媒体会议的建立方法、装置及系统
KR20140006221A (ko) 회의 처리 장치 선택 방법 및 이를 이용한 화상 회의 시스템
US9013537B2 (en) Method, device, and network systems for controlling multiple auxiliary streams
US20190253666A1 (en) Call Processing Method and Gateway
WO2016206471A1 (zh) 多媒体业务处理方法、系统及装置
CN114598853A (zh) 视频数据的处理方法、装置及网络侧设备
WO2016086371A1 (zh) 一种会议资源调度的方法及装置
WO2023005487A1 (zh) 音视频会议实现方法、音视频会议系统及相关装置
CN117812218A (zh) 基于ims通信单流媒通道下的分屏会议实现方法
CN115734028A (zh) 一种基于级联编码的媒体流推送方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18895218

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018895218

Country of ref document: EP

Effective date: 20200728