CN109618120B - Video conference processing method and device

Info

Publication number
CN109618120B
CN109618120B (granted publication of application CN201811371892.0A)
Authority
CN
China
Prior art keywords
video
terminal
speech data
server
video network
Prior art date
Legal status
Active
Application number
CN201811371892.0A
Other languages
Chinese (zh)
Other versions
CN109618120A
Inventor
王晓燕 (Wang Xiaoyan)
袁庆宁 (Yuan Qingning)
李云鹏 (Li Yunpeng)
韩杰 (Han Jie)
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd
Priority to CN201811371892.0A
Publication of CN109618120A
Application granted
Publication of CN109618120B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H04N7/157 Conference systems defining a virtual conference space and using avatars or agents

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a video conference processing method and device, which are applied to a video network. The method comprises the following steps: during a video networking video conference, if a virtual terminal receives first speech data sent by the source internet terminal bound to it, the virtual terminal performs a first protocol conversion on the first speech data to obtain second speech data and sends the second speech data to the target video networking terminal through the video networking server; if the virtual terminal receives third speech data sent by a source video networking terminal through the video networking server, it performs a second protocol conversion on the third speech data to obtain fourth speech data and sends the fourth speech data to the target internet terminal bound to it. The virtual terminal thus simulates the function of a real video networking terminal, so that internet terminals can also participate in a video networking video conference, which expands the communication range of the conference and better meets user requirements.

Description

Video conference processing method and device
Technical Field
The present invention relates to the field of video networking technologies, and in particular, to a video conference processing method and a video conference processing apparatus.
Background
With the rapid development of network technologies, bidirectional communications such as video conferences and video teaching have become widespread in users' daily life, work, and study.
A video conference is a conference in which people at two or more locations hold a face-to-face conversation via communication devices and a network. According to the number of participating sites, video conferences can be divided into point-to-point conferences and multipoint conferences. Individuals in daily life have few requirements for the confidentiality of conversation content, conference quality, or conference scale, and can use video software such as Tencent QQ for video chat. Commercial video conferences of government agencies and enterprises, however, require a stable and secure network, reliable conference quality, a formal conference environment, and the like, so professional video conference equipment must be used to build a dedicated video conference system.
In the prior art, during a video conference, devices in the same network can communicate smoothly, but devices in different networks cannot exchange data. For example, a video conference on the video network usually allows only video networking terminals to participate, and non-video-networking terminals cannot access it, which greatly limits the communication range and fails to meet user requirements well.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a processing method for a video conference and a corresponding processing apparatus for a video conference, which overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for processing a video conference, where the method is applied to a video network, the video network includes a streaming media server, a video network server and a plurality of video network terminals, the internet includes a plurality of internet terminals, the streaming media server includes a plurality of virtual terminals, and the virtual terminals are bound to the internet terminals one to one, and the method includes:
when a video networking video conference is carried out, if the virtual terminal receives first speech data sent by a source internet terminal bound with the virtual terminal, first protocol conversion is carried out on the first speech data to obtain second speech data;
the virtual terminal sends the second speech data to a target video networking terminal through the video networking server;
if the virtual terminal receives third speech data sent by a source video networking terminal through the video networking server, second protocol conversion is carried out on the third speech data to obtain fourth speech data;
and the virtual terminal sends the fourth speech data to a target internet terminal bound to the virtual terminal.
Optionally, the step of performing a first protocol conversion on the first speech data to obtain second speech data includes: and the virtual terminal converts the first speech data encapsulated based on the Internet protocol into second speech data encapsulated based on the video networking protocol.
Optionally, the step of performing a second protocol conversion on the third speech data to obtain fourth speech data includes: and the virtual terminal converts the third speech data encapsulated based on the video networking protocol into fourth speech data encapsulated based on the internet protocol.
Optionally, before the step of performing the second protocol conversion on the third speech data to obtain fourth speech data, the method further includes: and if the virtual terminal judges that the third speech data comprises multiple paths of audio data, performing sound mixing processing on the third speech data.
Optionally, the step of sending, by the virtual terminal, the second speech data to the target video network terminal via the video network server includes: and the virtual terminal sends the second speech data to the target video network terminal through the video network server according to a downlink communication link configured for the target video network terminal.
On the other hand, the embodiment of the invention also discloses a processing device for a video conference, the device is applied to the video network, the video network comprises a streaming media server, a video network server and a plurality of video network terminals, the internet comprises a plurality of internet terminals, the streaming media server comprises a plurality of virtual terminals, the virtual terminals are bound to the internet terminals one to one, and each virtual terminal comprises:
the first conversion module is used for performing first protocol conversion on first speech data to obtain second speech data if the first speech data sent by a source internet terminal bound to the virtual terminal is received when a video networking video conference is performed;
the first sending module is used for sending the second speech data to a target video networking terminal through the video networking server;
the second conversion module is used for performing second protocol conversion on third speech data to obtain fourth speech data if the third speech data sent by a source video network terminal through the video network server is received;
and the second sending module is used for sending the fourth speech data to the target internet terminal bound to the virtual terminal.
Optionally, the first conversion module is specifically configured to convert the first speech data encapsulated based on the internet protocol into the second speech data encapsulated based on the video networking protocol.
Optionally, the second conversion module is specifically configured to convert the third speech data encapsulated based on the video networking protocol into fourth speech data encapsulated based on the internet protocol.
Optionally, the virtual terminal further includes: and the sound mixing module is used for carrying out sound mixing processing on the third speech data if the third speech data comprises multiple paths of audio data.
Optionally, the first sending module is specifically configured to send, by the video networking server, the second speech data to the target video networking terminal according to a downlink communication link configured for the target video networking terminal.
In the embodiment of the invention, a plurality of virtual terminals are arranged in the streaming media server, and the virtual terminals are bound to the internet terminals one to one. When a video networking video conference is carried out, if a virtual terminal receives first speech data sent by the source internet terminal bound to it, the virtual terminal performs a first protocol conversion on the first speech data to obtain second speech data and sends the second speech data to a target video networking terminal through the video networking server; if the virtual terminal receives third speech data sent by a source video networking terminal through the video networking server, it performs a second protocol conversion on the third speech data to obtain fourth speech data and sends the fourth speech data to the target internet terminal bound to it. Therefore, in the embodiment of the invention, the function of a real video networking terminal can be simulated by the virtual terminal, so that data from an internet terminal can be sent to a video networking terminal and data from a video networking terminal can be sent to an internet terminal. The internet terminal can thus also participate in the video networking video conference, which expands the communication range of the video networking conference and better meets user requirements.
Drawings
Fig. 1 is a schematic networking diagram of a video network of the present invention;
Fig. 2 is a schematic diagram of a hardware structure of a node server according to the present invention;
Fig. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
Fig. 4 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention;
Fig. 5 is a flowchart illustrating the steps of a method for processing a video conference according to a first embodiment of the present invention;
Fig. 6 is a block diagram of a processing apparatus for a video conference according to a second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video network is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video and pushes numerous internet applications toward high-definition, face-to-face video.
The video network adopts real-time high-definition video switching technology and can integrate dozens of required services, such as video, voice, pictures, text, communication and data, on one network platform, for example high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, delayed television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-operated) channels, intelligent video broadcast control, and information distribution, and realizes high-quality video playback through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in video networking has improved over traditional Ethernet (Ethernet) to face the potentially enormous video traffic on the network. Unlike pure network Packet Switching (Packet Switching) or network Circuit Switching (Circuit Switching), the Packet Switching is adopted by the technology of the video networking to meet the Streaming requirement. The video networking technology has the advantages of flexibility, simplicity and low price of packet switching, and simultaneously has the quality and safety guarantee of circuit switching, thereby realizing the seamless connection of the whole network switching type virtual circuit and the data format.
Switching Technology
The video network uses the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It has end-to-end seamless connection across the whole network, connects directly to user terminals, and directly carries IP data packets. User data requires no format conversion anywhere across the network. The video network is a higher-level form of the Ethernet and a real-time switching platform; it can realize whole-network, large-scale, real-time transmission of high-definition video that the current internet cannot achieve, and pushes numerous network video applications toward high definition and unification.
Server Technology
The server technology of the video network and unified video platform differs from that of traditional servers. Its streaming media transmission is built on a connection-oriented basis, its data processing capability is independent of flow and communication time, and a single network layer can contain both signaling and data transmission. For voice and video services, streaming media processing on the video network and unified video platform is much simpler than data processing, and its efficiency is more than a hundred times higher than that of a traditional server.
Storage Technology
To handle media content of very large capacity and very large flow, the ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system. The program information in a server instruction is mapped to a specific hard disk space, and the media content no longer passes through the server but is sent directly and instantly to the user terminal, with a typical user waiting time of less than 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of that of an IP internet system of the same grade, yet the concurrent flow generated is 3 times that of a traditional hard disk array, and the overall efficiency is improved by more than 10 times.
Network Security Technology
The structural design of the video network eliminates, at the structural level, the network security problems that trouble the internet, through measures such as independent permission control for each service and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, avoids attacks by hackers and viruses, and provides a structurally worry-free, secure network for users.
Service Innovation Technology
The unified video platform integrates services with transmission: whether for a single user, a private-network user or a network aggregate, only one automatic connection is needed. The user terminal, set-top box or PC connects directly to the unified video platform to obtain a variety of multimedia video services in various forms. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling endless new-service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
wherein a packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 detects whether the Destination Address (DA), the Source Address (SA), the packet type, and the packet length of the packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise discards it; a packet (downlink data) coming from the uplink network interface module 302 enters the switching engine module 303; a data packet coming from the CPU module 304 enters the switching engine module 303; the switching engine module 303 looks up the address table 306 for the incoming packet, thereby obtaining the steering information of the packet; if the packet entering the switching engine module 303 goes from a downlink network interface to an uplink network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with the stream-id; if the queue of the packet buffer 307 is nearly full, it is discarded; if the packet entering the switching engine module 303 does not go from a downlink network interface to an uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to the steering information of the packet; if the queue of the packet buffer 307 is nearly full, it is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304, and generates tokens for all packet buffer queues going from downlink network interfaces to uplink network interfaces at programmable intervals, so as to control the rate of uplink forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
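The queue-polling rules above (conditions 1 and 2 for ordinary queues, plus the rate-control token for downlink-to-uplink queues) can be pictured with a minimal sketch. This is an illustration only, not the patent's implementation; the class and parameter names below are assumptions.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PacketQueue:
    """One packet-buffer queue inside the access switch (names are illustrative)."""
    to_uplink: bool                       # True for a downlink-to-uplink queue
    packets: deque = field(default_factory=deque)

def poll_queues(queues, port_send_buffer_full, rate_tokens):
    """Forward one packet per eligible queue, following the conditions above:
    1) the port send buffer is not full; 2) the queue packet counter is greater
    than zero; 3) for downlink-to-uplink queues only, a token generated by the
    rate control module is available."""
    forwarded = []
    for q in queues:
        if port_send_buffer_full:         # condition 1 fails
            continue
        if not q.packets:                 # condition 2 fails
            continue
        if q.to_uplink:
            if rate_tokens <= 0:          # condition 3 fails
                continue
            rate_tokens -= 1              # consume a rate-control token
        forwarded.append(q.packets.popleft())
    return forwarded, rate_tokens
```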
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein a data packet coming from the downlink network interface module 401 enters the packet detection module 405; the packet detection module 405 detects whether the ethernet MAC DA, the ethernet MAC SA, the ethernet length or frame type, the video networking destination address DA, the video networking source address SA, the video networking packet type, and the packet length of the packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id); then the MAC deletion module 410 strips the MAC DA, MAC SA and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise the packet is discarded;
The downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the video networking destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol conversion gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
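As an illustration of what the MAC deletion module 410 and MAC adding module 409 do, the sketch below strips and re-adds the 14-byte Ethernet header (6-byte MAC DA, 6-byte MAC SA, 2-byte length or frame type). The lookup of the terminal's MAC from the video networking destination address and the default frame-type value are assumptions; only the header arithmetic reflects the description above.

```python
ETH_HEADER_LEN = 6 + 6 + 2   # MAC DA + MAC SA + length/frame type

def strip_ethernet_header(frame: bytes) -> bytes:
    """MAC deletion module 410: remove MAC DA, MAC SA and length/frame type,
    leaving the bare video networking packet for the receiving buffer."""
    return frame[ETH_HEADER_LEN:]

def add_ethernet_header(vnet_packet: bytes, terminal_mac: bytes,
                        gateway_mac: bytes, frame_type: bytes = b"\x08\x00") -> bytes:
    """MAC adding module 409: prepend the terminal's MAC DA, the gateway's MAC SA
    and the length/frame type before the packet leaves the downlink port.
    The frame_type default here is only a placeholder."""
    assert len(terminal_mac) == 6 and len(gateway_mac) == 6
    return terminal_mac + gateway_mac + frame_type + vnet_packet
```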
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 devices of the metropolitan area network part can be mainly classified into 3 types: node servers, node switches, and metropolitan area servers. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to different types of data packets, and is 64 bytes if the data packet is a variety of protocol packets, and is 32+1024 or 1056 bytes if the data packet is a unicast data packet, of course, the length is not limited to the above 2 types;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more connections between two devices, i.e., there may be more than 2 connections between a node switch and a node server, a node switch and a node switch, and a node switch and a node server. However, the metropolitan area network address of each metropolitan area network device is unique, so in order to accurately describe the connection relationship between metropolitan area network devices, a parameter is introduced in the embodiment of the present invention: a label, which uniquely describes a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming that there are two connections between device A and device B, a packet from device A to device B has 2 labels, and a packet from device B to device A also has 2 labels. Labels are classified into incoming labels and outgoing labels: assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is a process under centralized control; that is, address allocation and label allocation of the metropolitan area network are both directed by the metropolitan area server, and the node switches and node servers execute them passively. This differs from MPLS label allocation, which is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC
Namely Destination Address (DA), Source Address (SA), Reserved byte (Reserved), tag, payload (pdu), CRC. The format of the tag may be defined by reference to the following: the tag is 32 bits with the upper 16 bits reserved and only the lower 16 bits used, and its position is between the reserved bytes and payload of the packet.
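A short sketch of how the 32-bit label (upper 16 bits reserved, lower 16 bits used) sits between the reserved bytes and the payload, and of the incoming-label to outgoing-label swap mentioned above. Field offsets follow the definitions in this section; the function names and byte order are assumptions, and the CRC is omitted for brevity.

```python
def insert_label(da: bytes, sa: bytes, label: int, payload: bytes) -> bytes:
    """Build DA(8) + SA(8) + Reserved(2) + Label(4) + Payload.
    Only the lower 16 bits of the label are used; the upper 16 bits stay reserved."""
    label_field = (label & 0xFFFF).to_bytes(4, "big")
    return da + sa + b"\x00\x00" + label_field + payload

def swap_label(packet: bytes, out_label: int) -> bytes:
    """Replace the incoming label with the outgoing label as the packet leaves a device,
    e.g. in-label 0x0000 becoming out-label 0x0001 in the example above."""
    head, rest = packet[:18], packet[22:]      # 18 = DA(8) + SA(8) + Reserved(2)
    return head + (out_label & 0xFFFF).to_bytes(4, "big") + rest
```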
Based on the characteristics of the video network, the video conference processing scheme provided by the embodiment of the invention follows the video networking protocol and enables an internet terminal to participate in a video conference on the video network.
Embodiment one
The video conference processing method can be applied to video networking. The video network can comprise a streaming media server, a video network server (can be the node server) and a plurality of video network terminals, and the internet can comprise a plurality of internet terminals. The video network server can be connected with a plurality of video network terminals, and the video network server and the video network terminals can perform bidirectional interaction based on a video network protocol. The video network server is connected with the streaming media server, and the video network server and the streaming media server can perform bidirectional interaction based on a video network protocol. The streaming media server can be connected with a plurality of internet terminals, and the streaming media server and the internet terminals can perform bidirectional interaction based on internet protocols (such as IP protocols).
The internet terminal may include a software terminal, such as a palmtop client installed on a mobile terminal, and may also include other terminals such as tablet computers. The video networking terminals may include various conference set-top boxes, video telephony set-top boxes, surgical teaching set-top boxes, media synthesizers, and the like.
Each video networking terminal connected to the video networking server needs to register with the video networking server in advance before it can perform normal services. After registration, the video networking server allocates information such as a video networking number and a video networking Media Access Control (MAC) address to the video networking terminal, and the video networking terminal and the video networking server then interact based on this video networking number and video networking MAC address.
The streaming media server is mainly used to realize operations such as conversion and interaction between data in the internet and data in the video network. The streaming media server may include a plurality of virtual terminals, through which it interacts with the video networking server. Each virtual terminal can be regarded as a simulated video networking terminal. Therefore, each virtual terminal in the streaming media server connected to the video networking server also needs to register with the video networking server in advance before normal services can be performed. After registration, the video networking server allocates information such as a video networking number and a video networking MAC address to the virtual terminal, and the virtual terminal and the video networking server interact based on them.
The streaming media server can interact with the internet terminals through the virtual terminals. Each internet terminal connected to the streaming media server registers with the streaming media server in advance and obtains a registration account. After registration, the streaming media server binds a virtual terminal to the internet terminal, that is, it binds the registration account of the internet terminal with the virtual terminal, and the internet terminal subsequently interacts with the virtual terminal bound to it. In this way, the plurality of virtual terminals are bound to the plurality of internet terminals one to one.
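The registration and binding bookkeeping described above can be summarized in a brief sketch: the video networking server assigns each (real or virtual) terminal a video networking number and MAC address, and the streaming media server keeps a one-to-one mapping from internet registration accounts to virtual terminals. All class, field and method names below, as well as the example values, are assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualTerminal:
    vnet_number: str                      # video networking number assigned at registration
    vnet_mac: str                         # video networking MAC address assigned at registration
    bound_account: Optional[str] = None   # registration account of the bound internet terminal

class StreamingMediaServer:
    def __init__(self, virtual_terminals):
        self.virtual_terminals = list(virtual_terminals)
        self.bindings = {}                # internet account -> virtual terminal (one to one)

    def register_internet_terminal(self, account: str) -> VirtualTerminal:
        """Bind a newly registered internet account to a free virtual terminal."""
        vt = next(vt for vt in self.virtual_terminals if vt.bound_account is None)
        vt.bound_account = account
        self.bindings[account] = vt
        return vt

# Example: two virtual terminals already registered with the video networking server
# sms = StreamingMediaServer([VirtualTerminal("0x810001", "aa:bb:01"),
#                             VirtualTerminal("0x810002", "aa:bb:02")])
# sms.register_internet_terminal("alice")   # alice is now bound to the first virtual terminal
```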
Referring to fig. 5, a flowchart illustrating steps of a method for processing a video conference according to a first embodiment of the present invention is shown.
The processing method of the video conference of the embodiment of the invention can comprise the following steps:
step 501, when a video networking video conference is performed, if a virtual terminal receives first speech data sent by a source internet terminal bound with the virtual terminal, performing first protocol conversion on the first speech data to obtain second speech data.
When a video networking video conference needs to be created, the user can perform corresponding operations in the conference control software, such as selecting a plurality of invited terminals and setting the authority of each invited terminal. An invited terminal may be a video networking terminal or an internet terminal. The conference control software sends the information of the invited terminals to the corresponding server, which then generates a conference invitation signaling according to the information and sends the conference invitation signaling to each invited terminal. After an invited terminal accepts the invitation, it joins the video networking video conference.
In the embodiment of the invention, the terminals participating in the video networking video conference may include video networking terminals and internet terminals. Therefore, during the video conference, the speaking terminal may be a video networking terminal or an internet terminal. The two cases are described separately below.
If the speaking terminal is an internet terminal, referred to as the source internet terminal, the source internet terminal collects the first speech data of the user, where the first speech data may include audio data or video data. The source internet terminal may encode the first speech data and encapsulate it based on the internet protocol, obtaining a first internet protocol data packet. The source internet terminal may send the first internet protocol data packet, that is, the first speech data encapsulated based on the internet protocol, through the internet to the virtual terminal bound to the source internet terminal.
If the virtual terminal receives the first speech data sent by the source internet terminal bound to it, it can process the first speech data correspondingly and then send it to the other terminals participating in the video networking video conference.
If another terminal participating in the conference is a video networking terminal, which serves as a target video networking terminal, the target video networking terminal cannot directly process the data sent by the source internet terminal, because the source internet terminal processes the relevant data based on the internet protocol while the target video networking terminal processes the relevant data based on the video networking protocol.
Therefore, the virtual terminal performs the first protocol conversion on the first speech data to obtain the second speech data. In specific implementation, the virtual terminal converts the first speech data encapsulated based on the internet protocol into the second speech data encapsulated based on the video networking protocol. Specifically, the virtual terminal decapsulates the first speech data encapsulated based on the internet protocol, that is, the first internet protocol data packet, to obtain the unencapsulated first speech data, and then encapsulates that speech data based on the video networking protocol to obtain a first video networking protocol data packet, that is, the second speech data encapsulated based on the video networking protocol.
If another terminal participating in the conference is an internet terminal, which serves as a target internet terminal, the virtual terminal does not need to perform protocol conversion on the first speech data, because both the source internet terminal and the target internet terminal process the relevant data based on the internet protocol. Therefore, the virtual terminal bound to the source internet terminal can forward the first speech data to the virtual terminal bound to the target internet terminal, and that virtual terminal then forwards the first speech data to the target internet terminal. After receiving the first speech data encapsulated based on the internet protocol, that is, the first internet protocol data packet, the target internet terminal decapsulates the first internet protocol data packet to obtain the unencapsulated first speech data and decodes and plays it.
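The two cases of step 501 can be sketched as follows: when the target is a video networking terminal, the speech data is re-encapsulated into the access network packet format of section 2.1; when the target is another internet terminal, it is forwarded without conversion. This is a minimal illustration under the assumption that the internet-side payload is handled as an opaque byte string; all names are invented, and the CRC is omitted.

```python
def first_protocol_conversion(first_speech_data: bytes,
                              vnet_da: bytes, vnet_sa: bytes) -> bytes:
    """Re-encapsulate speech data from the bound source internet terminal into the
    video networking packet format (DA + SA + Reserved + Payload, CRC omitted)."""
    return vnet_da + vnet_sa + b"\x00\x00" + first_speech_data

def handle_data_from_internet(first_speech_data: bytes, target_is_vnet_terminal: bool,
                              vnet_da: bytes, vnet_sa: bytes) -> bytes:
    """Step 501: convert only when the target is a video networking terminal;
    internet-to-internet traffic is forwarded between virtual terminals unchanged."""
    if target_is_vnet_terminal:
        return first_protocol_conversion(first_speech_data, vnet_da, vnet_sa)
    return first_speech_data
```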
Step 502, the virtual terminal sends the second speech data to the target video network terminal via the video network server.
If another terminal participating in the conference is a video networking terminal, which serves as a target video networking terminal, the virtual terminal can, after performing the protocol conversion on the first speech data to obtain the second speech data, send the second speech data to the target video networking terminal through the video networking server. After the target video networking terminal receives the second speech data encapsulated based on the video networking protocol, that is, the first video networking protocol data packet, it decapsulates the first video networking protocol data packet to obtain the unencapsulated second speech data and decodes and plays it.
In specific implementation, the virtual terminal sends the second speech data encapsulated based on the video networking protocol to the video networking server through the video networking, and then the video networking server sends the second speech data to the target video networking terminal through the video networking.
In a preferred embodiment, the video network server may send the second speech data encapsulated based on the video network protocol to the video network terminal according to a downlink communication link configured for the target video network terminal.
In practical applications, the video network is a network with a centralized control function and includes a master control server and lower-level network devices, the lower-level network devices including terminals. One of the core concepts of the video network is that the master control server notifies the switching devices to configure a table for the downlink communication link of the current service, and data packets are then transmitted based on the configured table.
Namely, the communication method in the video network includes:
and the master control server configures the downlink communication link of the current service.
And transmitting the data packet of the current service sent by the source terminal (virtual terminal) to a target terminal (such as a target video network terminal) according to the downlink communication link.
In the embodiment of the present invention, configuring the downlink communication link of the current service includes: and informing the switching equipment related to the downlink communication link of the current service to allocate the table.
Further, transmitting according to the downlink communication link includes: the configured table is consulted, and the switching equipment transmits the received data packet through the corresponding port.
In specific implementations, the services include unicast communication services and multicast communication services. That is, whether for multicast or unicast communication, the core concept of configuring tables and looking them up can be used to realize communication in the video network.
As mentioned above, the video network includes an access network portion, in which the master server is a node server and the lower-level network devices include an access switch and a terminal.
For the unicast communication service in the access network, the step of configuring the downlink communication link of the current service by the master server may include the following steps:
and a substep S11, the main control server obtains the downlink communication link information of the current service according to the service request protocol packet initiated by the source terminal, wherein the downlink communication link information includes the downlink communication port information of the main control server and the access switch participating in the current service.
In the substep S12, the main control server sets a downlink port to which a packet of the current service is directed in a packet address table inside the main control server according to the downlink communication port information of the main control server; and sending a port configuration command to the corresponding access switch according to the downlink communication port information of the access switch.
In sub-step S13, the access switch sets the downstream port to which the packet of the current service is directed in its internal packet address table according to the port configuration command.
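Sub-steps S11 to S13 amount to writing the downlink port of the current service into the packet address table of the node server and of every access switch on the link; packets are then forwarded by looking up that table. The sketch below is illustrative only, with invented class and field names.

```python
class SwitchingDevice:
    """Node server or access switch: both keep an internal packet address table that
    maps a service to the downlink port its packets should be directed to."""
    def __init__(self, name: str):
        self.name = name
        self.packet_address_table = {}            # service id -> downlink port

    def configure_port(self, service_id: str, downlink_port: int) -> None:
        self.packet_address_table[service_id] = downlink_port

    def forward(self, service_id: str, packet: bytes):
        """Consult the configured table and send the packet out of the corresponding port."""
        return self.packet_address_table[service_id], packet

def configure_downlink_link(node_server, access_switches, service_id, link_info):
    """Master control server side of S11-S13: set its own downlink port (S12), then push
    a port configuration command to each participating access switch (S12-S13)."""
    node_server.configure_port(service_id, link_info["node_server_port"])
    for switch, port in zip(access_switches, link_info["switch_ports"]):
        switch.configure_port(service_id, port)
```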
For a multicast communication service (e.g., video conference) in the access network, the step of the master server obtaining downlink information of the current service may include the following sub-steps:
in sub-step S21, the main control server obtains a service request protocol packet initiated by the target terminal and applying for the multicast communication service, where the service request protocol packet includes service type information, service content information, and an access network address of the target terminal.
Wherein, the service content information includes a service number.
And a substep S22, the main control server extracts the access network address of the source terminal in a preset content-address mapping table according to the service number.
In the substep of S23, the main control server obtains the multicast address corresponding to the source terminal and distributes the multicast address to the target terminal; and acquiring the communication link information of the current multicast service according to the service type information and the access network addresses of the source terminal and the target terminal.
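For the multicast case, sub-steps S21 to S23 boil down to resolving the source terminal from the service number through a content-address mapping table and handing the corresponding multicast address back to the requesting target terminal. The sketch below uses invented table and field names and is not the patent's implementation.

```python
def handle_multicast_request(request: dict, content_address_map: dict,
                             multicast_addresses: dict) -> dict:
    """S21-S23 in miniature: extract the source terminal's access network address from
    the service number (S22), obtain the multicast address corresponding to the source
    terminal (S23), and return what the target terminal needs to join the conference."""
    service_number = request["service_content"]["service_number"]
    source_address = content_address_map[service_number]          # S22
    multicast_address = multicast_addresses[source_address]       # S23
    return {
        "multicast_address": multicast_address,
        "service_type": request["service_type"],
        "source_access_address": source_address,
        "target_access_address": request["target_access_address"],
    }
```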
Step 503, if the virtual terminal receives third speech data sent by the source video network terminal through the video network server, performing second protocol conversion on the third speech data to obtain fourth speech data.
If the speaking terminal is a video networking terminal, referred to as the source video networking terminal, the source video networking terminal collects the third speech data of the user, where the third speech data may include audio data or video data. The source video networking terminal may encode the third speech data and encapsulate it based on the video networking protocol, obtaining a second video networking protocol data packet. The source video networking terminal can send the second video networking protocol data packet, that is, the third speech data encapsulated based on the video networking protocol, through the video network to the video networking server connected to it.
If the video network server receives the third speech data sent by the source video network terminal, the third speech data can be correspondingly processed and then sent to other terminals participating in the video network video conference.
If another terminal participating in the conference is an internet terminal, which serves as a target internet terminal, the target internet terminal cannot directly process the data transmitted by the source video networking terminal, because the source video networking terminal processes the relevant data based on the video networking protocol while the target internet terminal processes the relevant data based on the internet protocol.
Therefore, the video network server may first send the third speech data to the virtual terminal bound to the target internet terminal. In a preferred embodiment, the video networking server may send the third speech data encapsulated based on the video networking protocol to the virtual terminal bound to the target internet terminal according to the configured downlink communication link for the virtual terminal bound to the target internet terminal. And the virtual terminal performs second protocol conversion on the third speech data to obtain fourth speech data. In practical implementation, the virtual terminal converts the third speech data encapsulated based on the video networking protocol into the fourth speech data encapsulated based on the internet protocol. Specifically, the virtual terminal decapsulates third speech data encapsulated based on the video networking protocol, that is, the second video networking protocol data packet, to obtain unencapsulated third speech data, and encapsulates the third speech data based on the internet protocol to obtain a second internet protocol data packet, that is, fourth speech data encapsulated based on the internet protocol.
If another terminal participating in the conference is a video networking terminal, which serves as a target video networking terminal, the video networking server does not need to perform protocol conversion on the third speech data, because both the source and target video networking terminals process the relevant data based on the video networking protocol. Therefore, the video networking server connected to the source video networking terminal can forward the third speech data to the target video networking terminal. After the target video networking terminal receives the third speech data encapsulated based on the video networking protocol, that is, the second video networking protocol data packet, it decapsulates the second video networking protocol data packet to obtain the unencapsulated third speech data and decodes and plays it.
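The mirror of the earlier sketch covers steps 503 and 504: data arriving from the video network is stripped of its video networking header before being handed to the internet side when the target is an internet terminal, and is forwarded unchanged by the server when the target is another video networking terminal. The header length follows section 2.1; the function names are assumptions.

```python
VNET_HEADER_LEN = 8 + 8 + 2   # DA + SA + Reserved, as defined in section 2.1

def second_protocol_conversion(third_speech_data_vnet: bytes) -> bytes:
    """Strip the video networking header so the speech data can be re-encapsulated
    based on the internet protocol for the bound target internet terminal."""
    return third_speech_data_vnet[VNET_HEADER_LEN:]

def handle_data_from_video_network(vnet_packet: bytes, target_is_internet_terminal: bool) -> bytes:
    """Steps 503-504: convert only when the target is an internet terminal;
    video-networking-to-video-networking traffic needs no protocol conversion."""
    if target_is_internet_terminal:
        return second_protocol_conversion(vnet_packet)
    return vnet_packet
```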
And step 504, the virtual terminal sends the fourth speech data to the target internet terminal bound to it.
If another terminal participating in the conference is an internet terminal, which serves as a target internet terminal, the virtual terminal may, after performing the protocol conversion on the third speech data to obtain the fourth speech data, send the fourth speech data to the target internet terminal bound to it. After the target internet terminal receives the fourth speech data encapsulated based on the internet protocol, that is, the second internet protocol data packet, it decapsulates the second internet protocol data packet to obtain the unencapsulated fourth speech data and decodes and plays it.
In practical applications, users of multiple terminals may be speaking at the same time in a video networking video conference. In this case, the speech data may be mixed before being transmitted to the other terminals participating in the conference.
For example, before the video networking server sends the relevant speech data, such as the second speech data, to the target video networking terminal, if it detects that the second speech data includes multiple channels of audio data, the multiple channels of audio data in the second speech data are mixed, and the mixed audio data are then sent to the target video networking terminal.
For another example, before the virtual terminal sends the related speech data, such as the third speech data, to the target internet terminal, if it is detected that the third speech data includes multiple channels of audio data, the multiple channels of audio data in the third speech data are subjected to audio mixing processing, and after the audio mixing processing, the third speech data is subjected to second protocol conversion to obtain fourth speech data, and the fourth speech data is sent to the target internet terminal.
Audio mixing, often abbreviated as MIX, is the integration of multiple sources of sound into one stereo or mono track. These original sound signals may come from different musical instruments, voices or orchestras and be recorded from live performances or in recording studios. During mixing, the frequency, dynamics, timbre, positioning, reverberation and sound field of each original signal are adjusted individually to optimize each track, and the tracks are then superimposed into the final product.
The mixing process may be performed by devices such as a synthesizer, a sound effect processor or a mixing console, or by mixing software such as GoldWave. For the specific mixing process, a person skilled in the art may perform the related processing according to practical experience, which is not discussed in detail in the embodiment of the present invention.
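As a minimal illustration of the mixing step before the second protocol conversion, the sketch below sums several mono PCM channels sample by sample and clips the result to the 16-bit range. The patent leaves the concrete mixing algorithm open, so this is only one simple possibility with invented names.

```python
def mix_audio(channels):
    """Mix several mono PCM streams (16-bit signed samples) into one track by
    summing them sample by sample and clipping to the int16 range."""
    if not channels:
        return []
    length = max(len(c) for c in channels)
    mixed = []
    for i in range(length):
        total = sum(c[i] for c in channels if i < len(c))
        mixed.append(max(-32768, min(32767, total)))   # clip to 16-bit range
    return mixed

# Example: two participants speaking at once
# mix_audio([[1000, 2000, 30000], [500, -2000, 30000]]) -> [1500, 0, 32767]
```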
In the embodiment of the invention, the virtual terminal can simulate the function of a real video networking terminal, so that data from an internet terminal can be sent to a video networking terminal and data from a video networking terminal can be sent to an internet terminal. The internet terminal can thus also participate in the video networking video conference, which expands the communication range of the video networking conference and better meets user requirements.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Embodiment two
Referring to fig. 6, a block diagram of a processing apparatus for a video conference according to a second embodiment of the present invention is shown. The processing apparatus of the video conference can be applied to the video network. The video network comprises a streaming media server, a video network server and a plurality of video network terminals, the internet comprises a plurality of internet terminals, the streaming media server comprises a plurality of virtual terminals, and the virtual terminals are bound to the internet terminals one to one.
The processing apparatus of the video conference of the embodiment of the invention can comprise the following modules located in each virtual terminal.
The virtual terminal includes:
a first conversion module 601, configured to, when a video networking video conference is performed, perform a first protocol conversion on first speech data to obtain second speech data if the first speech data sent by the source internet terminal bound to the virtual terminal is received;
a first sending module 602, configured to send the second speech data to a target video networking terminal via the video networking server;
a second conversion module 603, configured to, if third speech data sent by a source video network terminal via the video network server is received, perform second protocol conversion on the third speech data to obtain fourth speech data;
a second sending module 604, configured to send the fourth speech data to the target internet terminal bound to the virtual terminal.
In a preferred embodiment, the first conversion module is specifically configured to convert the first speech data encapsulated based on the internet protocol into the second speech data encapsulated based on the video networking protocol.
In a preferred embodiment, the second conversion module is specifically configured to convert the third speech data encapsulated based on the video networking protocol into the fourth speech data encapsulated based on the internet protocol.
In a preferred embodiment, the virtual terminal further includes: and the sound mixing module is used for carrying out sound mixing processing on the third speech data if the third speech data comprises multiple paths of audio data.
In a preferred embodiment, the first sending module is specifically configured to send, via the video networking server, the second speech data to the target video networking terminal according to a downlink communication link configured for the target video networking terminal.
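The module structure of this embodiment can be pictured as a small composition of callables held by each virtual terminal. Everything below is an illustrative sketch, not the patent's code; the callables stand in for the first conversion module 601, first sending module 602, second conversion module 603, second sending module 604 and the optional sound mixing module.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class VirtualTerminalDevice:
    first_conversion: Callable[[bytes], bytes]         # module 601: internet -> video networking
    first_sending: Callable[[bytes], None]             # module 602: to target via video networking server
    second_conversion: Callable[[bytes], bytes]        # module 603: video networking -> internet
    second_sending: Callable[[bytes], None]            # module 604: to the bound target internet terminal
    mixer: Optional[Callable[[bytes], bytes]] = None   # optional sound mixing module

    def on_internet_data(self, first_speech_data: bytes) -> None:
        """Handle speech data from the bound source internet terminal."""
        self.first_sending(self.first_conversion(first_speech_data))

    def on_video_network_data(self, third_speech_data: bytes) -> None:
        """Handle speech data received via the video networking server, mixing first if needed."""
        if self.mixer is not None:
            third_speech_data = self.mixer(third_speech_data)
        self.second_sending(self.second_conversion(third_speech_data))
```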
In the embodiment of the invention, the virtual terminal can simulate the function of a real video networking terminal, so that data from an internet terminal can be sent to a video networking terminal and data from a video networking terminal can be sent to an internet terminal. The internet terminal can thus also participate in the video networking video conference, which expands the communication range of the video networking conference and better meets user requirements.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The video conference processing method and device provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (8)

1. A video conference processing method, characterized in that the method is applied to a video network, wherein the video network comprises a streaming media server, a video network server and a plurality of video network terminals, the internet comprises a plurality of internet terminals, and the streaming media server comprises a plurality of virtual terminals; each internet terminal connected with the streaming media server is registered on the streaming media server in advance, and after registration the streaming media server binds a unique virtual terminal to each internet terminal, each virtual terminal being regarded as a simulated video network terminal; each virtual terminal in the streaming media server connected with the video network server needs to be registered on the video network server in advance in order to perform normal services; after registration, the video network server allocates information such as a video network number and a video network MAC address to the virtual terminal, and the virtual terminal and the video network server interact based on the video network number and the video network MAC address; the method comprises the following steps:
when a video networking video conference is carried out, if the virtual terminal receives first speech data sent by a source internet terminal bound with the virtual terminal, the virtual terminal converts the first speech data encapsulated based on the internet protocol into second speech data encapsulated based on the video networking protocol;
the virtual terminal sends the second speech data to a target video networking terminal through the video networking server;
if the virtual terminal receives third speech data sent by a source video networking terminal through the video networking server, the virtual terminal performs second protocol conversion on the third speech data to obtain fourth speech data;
and the virtual terminal sends the fourth speech data to a target internet terminal bound with the virtual terminal.
2. The method of claim 1, wherein the step of performing a second protocol conversion on the third speech data to obtain fourth speech data comprises:
and the virtual terminal converts the third speech data encapsulated based on the video networking protocol into fourth speech data encapsulated based on the internet protocol.
3. The method of claim 1, wherein before the step of performing the second protocol conversion on the third speech data to obtain fourth speech data, the method further comprises:
and if the virtual terminal judges that the third speech data comprises multiple paths of audio data, performing sound mixing processing on the third speech data.
4. The method of claim 1, wherein the step of the virtual terminal sending the second speech data to the target video network terminal via the video network server comprises:
and the virtual terminal sends the second speech data to the target video network terminal through the video network server according to a downlink communication link configured for the target video network terminal.
5. A video conference processing device, applied to a video network, wherein the video network comprises a streaming media server, a video network server and a plurality of video network terminals, the internet comprises a plurality of internet terminals, and the streaming media server comprises a plurality of virtual terminals; each internet terminal connected with the streaming media server is registered on the streaming media server in advance, and after registration the streaming media server binds a unique virtual terminal to each internet terminal, each virtual terminal being regarded as a simulated video network terminal; each virtual terminal in the streaming media server connected with the video network server needs to be registered on the video network server in advance in order to perform normal services; after registration, the video network server allocates information such as a video network number and a video network MAC address to the virtual terminal, and the virtual terminal and the video network server interact based on the video network number and the video network MAC address; the virtual terminal includes:
the first conversion module is used for, when a video networking video conference is carried out, converting first speech data encapsulated based on the internet protocol into second speech data encapsulated based on the video networking protocol if the first speech data sent by a source internet terminal bound with the virtual terminal is received;
the first sending module is used for sending the second speech data to a target video networking terminal through the video networking server;
the second conversion module is used for performing second protocol conversion on third speech data to obtain fourth speech data if the third speech data sent by a source video network terminal through the video network server is received;
and the second sending module is used for sending the fourth speech data to a target internet terminal bound with the virtual terminal.
6. The apparatus according to claim 5, wherein the second conversion module is specifically configured to convert the third speech data encapsulated according to the video networking protocol into fourth speech data encapsulated according to the internet protocol.
7. The apparatus of claim 5, wherein the virtual terminal further comprises:
and the sound mixing module is used for performing sound mixing processing on the third speech data if the third speech data comprises multiple channels of audio data.
8. The apparatus according to claim 5, wherein the first sending module is specifically configured to send the second speech data to the target video network terminal via the video network server according to a downlink communication link configured for the target video network terminal.
CN201811371892.0A 2018-11-15 2018-11-15 Video conference processing method and device Active CN109618120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811371892.0A CN109618120B (en) 2018-11-15 2018-11-15 Video conference processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811371892.0A CN109618120B (en) 2018-11-15 2018-11-15 Video conference processing method and device

Publications (2)

Publication Number Publication Date
CN109618120A (en) 2019-04-12
CN109618120B (en) 2021-09-21

Family

ID=66003285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811371892.0A Active CN109618120B (en) 2018-11-15 2018-11-15 Video conference processing method and device

Country Status (1)

Country Link
CN (1) CN109618120B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110062192A (en) * 2019-04-18 2019-07-26 视联动力信息技术股份有限公司 Data processing method, device and storage medium in video conference
CN110225287A (en) * 2019-04-29 2019-09-10 视联动力信息技术股份有限公司 Audio-frequency processing method and device
CN110460804B (en) * 2019-07-30 2021-01-22 视联动力信息技术股份有限公司 Conference data transmitting method, system, device and computer readable storage medium
CN110913162A (en) * 2019-10-28 2020-03-24 视联动力信息技术股份有限公司 Audio and video stream data processing method and system
CN111030995B (en) * 2019-11-11 2023-05-16 视联动力信息技术股份有限公司 Video information processing method and device based on video networking
CN111131753B (en) * 2019-12-25 2022-09-20 视联动力信息技术股份有限公司 Conference processing method and conference management platform server
CN114189648A (en) * 2021-11-17 2022-03-15 海南乾唐视联信息技术有限公司 Method and device for adding live broadcast source into video conference
CN114189649A (en) * 2021-11-17 2022-03-15 海南乾唐视联信息技术有限公司 Video conference live broadcasting method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121588A (en) * 2016-11-30 2018-06-05 北京视联动力国际信息技术有限公司 It is a kind of access external resource method and its regarding networking access server
CN108234421A (en) * 2016-12-21 2018-06-29 北京视联动力国际信息技术有限公司 It is a kind of to regard networked terminals and the method and system of internet terminal audio data intercommunication
CN108616487A (en) * 2016-12-09 2018-10-02 北京视联动力国际信息技术有限公司 Based on the sound mixing method and device regarding networking
CN108632398A (en) * 2017-07-27 2018-10-09 北京视联动力国际信息技术有限公司 A kind of conference access method and system, association turn server and conference management terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102377633B (en) * 2010-08-06 2014-10-08 北京乾唐视联网络科技有限公司 Communication connection method and system of access network device
CN108418778A (en) * 2017-02-09 2018-08-17 北京视联动力国际信息技术有限公司 A kind of internet and method, apparatus and interactive system regarding connected network communication

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121588A (en) * 2016-11-30 2018-06-05 北京视联动力国际信息技术有限公司 It is a kind of access external resource method and its regarding networking access server
CN108616487A (en) * 2016-12-09 2018-10-02 北京视联动力国际信息技术有限公司 Based on the sound mixing method and device regarding networking
CN108234421A (en) * 2016-12-21 2018-06-29 北京视联动力国际信息技术有限公司 It is a kind of to regard networked terminals and the method and system of internet terminal audio data intercommunication
CN108632398A (en) * 2017-07-27 2018-10-09 北京视联动力国际信息技术有限公司 A kind of conference access method and system, association turn server and conference management terminal

Also Published As

Publication number Publication date
CN109618120A (en) 2019-04-12

Similar Documents

Publication Publication Date Title
CN109618120B (en) Video conference processing method and device
CN108574688B (en) Method and device for displaying participant information
CN109120946B (en) Method and device for watching live broadcast
CN109302576B (en) Conference processing method and device
CN110620896B (en) Conference establishing method, system and device
CN109068186B (en) Method and device for processing packet loss rate
CN108616487B (en) Audio mixing method and device based on video networking
CN109120879B (en) Video conference processing method and system
CN110460804B (en) Conference data transmitting method, system, device and computer readable storage medium
CN109660816B (en) Information processing method and device
CN110475090B (en) Conference control method and system
CN110022295B (en) Data transmission method and video networking system
CN108809921B (en) Audio processing method, video networking server and video networking terminal
CN110138728B (en) Video data sharing method and device
CN110855926A (en) Video conference processing method and device
CN110191304B (en) Data processing method, device and storage medium
CN110049273B (en) Video networking-based conference recording method and transfer server
CN109743522B (en) Communication method and device based on video networking
CN109194902B (en) Hierarchical conference scheduling method and system
CN109040656B (en) Video conference processing method and system
CN108630215B (en) Echo suppression method and device based on video networking
CN109286775B (en) Multi-person conference control method and system
CN111327868A (en) Method, terminal, server, device and medium for setting conference speaking party role
CN109005378B (en) Video conference processing method and system
CN108881134B (en) Communication method and system based on video conference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant