CN110049275B - Information processing method and device in video conference and storage medium - Google Patents

Information processing method and device in video conference and storage medium

Info

Publication number
CN110049275B
CN201910364256.3A · CN110049275B
Authority
CN
China
Prior art keywords
terminal
information
video
data
caption information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910364256.3A
Other languages
Chinese (zh)
Other versions
CN110049275A (en)
Inventor
刘艳飞
朱道彦
韩杰
王艳辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201910364256.3A
Publication of CN110049275A
Application granted
Publication of CN110049275B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides an information processing method, an information processing apparatus, and a storage medium for a video conference. The method comprises the following steps: a video network server receives caption information sent by a chairman terminal, the caption information having been sent to the chairman terminal by a conference control terminal after a speaking party terminal was selected, and including the terminal information of the speaking party terminal; the video network server forwards the caption information to the speaking party terminal; the speaking party terminal acquires speech data and sends the caption information to the video network server along with the speech data; the video network server forwards the caption information, along with the speech data, to the participant terminal; and the participant terminal parses the caption information and displays it. In the embodiment of the invention, the speaking party terminal sends the caption information by having it follow the video data. Because the data encapsulated when video data is sent may be long, caption information sent along with video data can be encapsulated and transmitted as frame data, which increases the length of caption information that can be encapsulated.

Description

Information processing method and device in video conference and storage medium
Technical Field
The present invention relates to the field of video networking technologies, and in particular, to an information processing method, an information processing apparatus, and a storage medium for a video conference.
Background
With the rapid development of network technologies, bidirectional communications such as video conferencing, video teaching, and video telephony have become widespread in users' daily life, work, and study.
Video conferencing refers to a conference in which people at two or more locations hold a face-to-face conversation through communication devices and a network. Video conferences can be divided into point-to-point conferences and multipoint conferences according to the number of participating sites. Individual users in daily life have few requirements regarding the confidentiality of conversation content, conference quality, or conference scale, and can simply use video software for video chat. Commercial video conferences of government agencies, enterprises, and institutions, however, require a stable and secure network, reliable conference quality, a formal conference environment, and similar conditions, so professional video conferencing equipment is used to build a dedicated video conference system.
In a video conference, information about the speaking party is typically sent to the other participating parties. In the prior art, the information of the speaking party is first sent to the server, which then forwards it to the other participating parties. In this method, however, the server transmits the speaking party's information on its own, and the data length that can be transmitted is short, which is very limiting.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide an information processing method, apparatus, and storage medium in a video conference that overcome or at least partially solve the above problems.
In a first aspect, the embodiment of the invention discloses an information processing method in a video conference, which is applied to a video conference of a video network, wherein the video network comprises a conference control terminal, a video network server and a video network terminal, and the video network terminal comprises a chairman terminal, a speaking party terminal and a participant terminal; the method comprises the following steps:
the video network server receives caption information sent by the chairman terminal; the caption information is sent to the chairman terminal by the conference control terminal after a speaking party terminal is selected, and comprises the terminal information of the speaking party terminal;
the video network server forwards the caption information to the speaking party terminal;
the speaking party terminal acquires speech data and sends the caption information to the video network server along with the speech data;
the video network server forwards the caption information to the participant terminal along with the speech data;
and the participant terminal parses the caption information and displays it.
Optionally, the step of sending the caption information to the video network server along with the speech data includes: the speaking party terminal encapsulates the caption information into a first data packet carrying a first operation code, and encapsulates the speech data into a second data packet carrying a second operation code; the first operation code indicates that the data type is caption information, and the second operation code indicates that the data type is speech data; the length of the caption information in the first data packet is 16 words; and the speaking party terminal sends the first data packet and the second data packet together to the video networking server based on a video networking transparent transmission protocol.
Optionally, the step of the participant terminal parsing the caption information includes: the participant terminal extracts the first operation code carried by the first data packet and the second operation code carried by the second data packet, respectively; and upon determining that the first operation code is the operation code indicating that the data type is caption information, the participant terminal parses the first data packet to obtain the caption information.
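The opcode-based encapsulation and parsing described above can be sketched as follows. The numeric opcode values, the 1-byte-opcode plus 2-byte-length header layout, and the 4-byte word size (so that "16 words" of caption data means 64 bytes) are assumptions made for this illustration; the patent does not specify them.

```python
import struct

# Hypothetical opcode values: the patent states only that the first
# operation code marks caption information and the second marks speech
# data; the constants and header layout below are illustrative.
OP_CAPTION = 0x01   # data type: caption information
OP_SPEECH = 0x02    # data type: speech data
CAPTION_LEN = 64    # "16 words" of caption data, assuming 4-byte words

def encapsulate(opcode: int, payload: bytes) -> bytes:
    """Prefix a payload with a 1-byte opcode and a 2-byte length field."""
    if opcode == OP_CAPTION:
        # Caption information is carried as a fixed-length frame.
        payload = payload.ljust(CAPTION_LEN, b"\x00")[:CAPTION_LEN]
    return struct.pack("!BH", opcode, len(payload)) + payload

def parse(packet: bytes):
    """Return (opcode, payload); a participant terminal would display
    the payload only when the opcode marks it as caption information."""
    opcode, length = struct.unpack("!BH", packet[:3])
    return opcode, packet[3:3 + length]
```

Because both packet types share one header shape, the participant terminal can dispatch on the opcode alone without inspecting payload contents.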
Optionally, the step of sending the caption information to the video network server along with the speech data includes: the speaking party terminal sends the caption information to the video networking server along with the speech data at set time intervals.
Optionally, before the step in which the speaking party terminal obtains speech data and sends the caption information to the video network server along with the speech data, the method further includes: the speaking party terminal parses the caption information and compares the terminal information included in the caption information with its own terminal information. The step in which the speaking party terminal obtains speech data and sends the caption information to the video network server along with the speech data then includes: when the comparison result is consistent, the speaking party terminal obtains the speech data and sends the caption information to the video network server along with the speech data.
In a second aspect, the embodiment of the invention discloses an information processing device in a video conference, wherein the device is applied to a video conference of a video network, the video network comprises a conference control terminal, a video network server and a video network terminal, and the video network terminal comprises a chairman terminal, a speaking party terminal and a participant terminal;
the video network server comprises:
the receiving module is used for receiving the caption information sent by the chairman terminal; the caption information is sent to the chairman terminal by the conference control terminal after a speaking party terminal is selected, and comprises the terminal information of the speaking party terminal;
the first forwarding module is used for forwarding the caption information to the speaking party terminal;
the talker terminal includes:
the sending module is used for obtaining speech data and sending the caption information to the video networking server along with the speech data;
the video network server further comprises:
the second forwarding module is used for forwarding the caption information to the participant terminal along with the speech data;
the participant terminal includes:
and the parsing module is used for parsing the caption information and displaying it.
Optionally, the sending module includes: an encapsulation unit, used for encapsulating the caption information into a first data packet carrying a first operation code and encapsulating the speech data into a second data packet carrying a second operation code, where the first operation code indicates that the data type is caption information, the second operation code indicates that the data type is speech data, and the length of the caption information in the first data packet is 16 words; and an information sending unit, used for sending the first data packet and the second data packet together to the video networking server based on a video networking transparent transmission protocol.
Optionally, the parsing module includes: an extracting unit, configured to extract the first operation code carried by the first data packet and the second operation code carried by the second data packet, respectively; and an information parsing unit, configured to parse the first data packet to obtain the caption information if the first operation code is determined to be the operation code indicating that the data type is caption information.
Optionally, the sending module is specifically configured to send the caption information to the video networking server along with the speech data at set time intervals.
Optionally, the speaking party terminal further includes: a comparison module, used for parsing the caption information and comparing the terminal information included in the caption information with the terminal information of the speaking party terminal itself; the sending module is specifically configured to, when the comparison result of the comparison module is consistent, obtain speech data and send the caption information to the video networking server along with the speech data.
In a third aspect, an embodiment of the present invention discloses an information processing apparatus in a video conference, including: one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the method of information processing in a video conference as described in any one of the above.
In a fourth aspect, an embodiment of the present invention discloses a computer-readable storage medium storing a computer program for causing a processor to execute the information processing method in a video conference as described in any one of the above.
In the embodiment of the invention, after selecting the speaking party terminal, the conference control terminal sends caption information including the terminal information of the speaking party terminal to the chairman terminal; after receiving the caption information sent by the chairman terminal, the video network server forwards it to the speaking party terminal; the speaking party terminal acquires speech data and sends the caption information to the video network server along with the speech data; the video network server forwards the caption information, along with the speech data, to the participant terminal; and the participant terminal parses the caption information and displays it. In the embodiment of the invention, therefore, the caption information is sent by the speaking party terminal in a manner that follows the video data. Because the data encapsulated when video data is sent may be long, the caption information can be encapsulated and sent as frame data when it follows the video data, which increases the length of caption information that can be encapsulated and solves the prior-art limitation on data length.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware structure of a node server according to the present invention;
FIG. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
FIG. 4 is a schematic diagram of a hardware structure of an Ethernet protocol conversion gateway according to the present invention;
FIG. 5 is a flowchart illustrating steps of an information processing method in a video conference according to an embodiment of the present invention;
FIG. 6 is a block diagram of an information processing apparatus in a video conference according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, and it pushes many Internet applications toward high-definition video, enabling high-definition face-to-face communication.
The video networking adopts real-time high-definition video switching technology and can integrate dozens of required services, such as video, voice, pictures, text, communication, and data, on a single network platform. These services include high-definition video conferencing, video surveillance, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD (video on demand), television mail, Personal Video Recorder (PVR), intranet (self-run) channels, intelligent video broadcast control, and information distribution, and high-definition-quality video broadcasting is realized through a television or a computer.
To better understand the embodiments of the present invention, the video networking is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
The network technology innovation of the video networking improves the traditional Ethernet (Ethernet) to face the potentially huge video traffic on the network. Unlike pure network packet switching (Packet Switching) or network circuit switching (Circuit Switching), the video networking technology adopts packet switching to meet streaming requirements. The video networking technology has the flexibility, simplicity, and low cost of packet switching while retaining the quality and security guarantees of circuit switching, realizing the seamless connection of whole-network switched virtual circuits and data formats.
Switching Technology (Switching Technology)
The video networking adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It has end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere across the network. The video networking is a higher-level form of Ethernet and a real-time switching platform; it can realize the whole-network, large-scale, real-time transmission of high-definition video that the current Internet cannot, and pushes many network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology of the video networking and the unified video platform is different from traditional server technology. Its streaming media transmission is built on a connection-oriented basis, its data processing capability is independent of flow and communication time, and a single network layer can contain both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than general data processing, and efficiency is improved by more than a hundred times over a traditional server.
Storage Technology (Storage Technology)
To handle media content of very large capacity and very large flow, the ultra-high-speed storage technology of the unified video platform adopts an advanced real-time operating system. The program information in a server instruction is mapped to specific hard disk space, and the media content no longer passes through the server but is sent directly and instantly to the user terminal, with a typical user waiting time of less than 0.2 second. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of that of an IP Internet system of the same grade, yet concurrent flow 3 times greater than that of a traditional hard disk array is produced, and overall efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
Through means such as independent permission control for each service and complete isolation of equipment and user data, the structural design of the video networking structurally eliminates the network security problems that trouble the Internet. It generally needs no antivirus programs or firewalls, avoids attacks by hackers and viruses, and provides users with a structurally worry-free secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private-network user, or a network aggregate, only one automatic connection is needed. A user terminal, set-top box, or PC connects directly to the unified video platform to obtain a rich variety of multimedia video services. The unified video platform uses a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
Video networking device classification
1.1 The devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or a national network, a global network, etc.) and an access network.
1.2 The devices of the access network part can be mainly classified into 3 types: node servers, access switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
A packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 detects whether the Destination Address (DA), Source Address (SA), packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 303; otherwise, the packet is discarded. A packet (downlink data) coming from the uplink network interface module 302 enters the switching engine module 303, and an incoming data packet from the CPU module 304 likewise enters the switching engine module 303. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain the packet's direction information. If a packet entering the switching engine module 303 goes from the downlink network interface to the uplink network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 303 does not go from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its direction information; if that queue is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues and may include two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304, and generates tokens for packet buffer queues from all downstream network interfaces to upstream network interfaces at programmable intervals to control the rate of upstream forwarding.
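The three-condition uplink forwarding rule above (send buffer not full, packet counter greater than zero, and a token obtained from the rate control module) can be sketched as a simple token-gated queue. The buffer capacity and token granularity are illustrative assumptions, not values from the patent.

```python
from collections import deque

class UplinkQueue:
    """Sketch of the access switch's downlink-to-uplink forwarding rule:
    a packet is forwarded only when (1) the port send buffer is not
    full, (2) the queue packet counter is greater than zero, and (3) a
    token generated by the rate control module is available."""

    def __init__(self, send_buffer_capacity: int = 4):
        self.queue = deque()          # packet buffer queue
        self.send_buffer = []         # port send buffer
        self.capacity = send_buffer_capacity
        self.tokens = 0

    def add_token(self) -> None:
        """Called by the rate control module at programmable intervals."""
        self.tokens += 1

    def enqueue(self, packet: bytes) -> None:
        self.queue.append(packet)

    def poll(self) -> bool:
        """Forward one packet if all three conditions hold."""
        if (len(self.send_buffer) < self.capacity   # 1) buffer not full
                and len(self.queue) > 0             # 2) counter > zero
                and self.tokens > 0):               # 3) token available
            self.tokens -= 1
            self.send_buffer.append(self.queue.popleft())
            return True
        return False
```

Because tokens are issued at set intervals, the uplink forwarding rate is capped regardless of how fast packets arrive from the downlink side.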
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway:
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
A data packet coming from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), and the MAC deletion module 410 then strips the MAC DA, MAC SA, and length or frame type (2 bytes) before the packet enters the corresponding receive buffer; otherwise, the packet is discarded.
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes: the first byte represents the type of the data packet (such as the various protocol packets, multicast data packets, unicast data packets, etc.), allowing at most 256 types; bytes 2 to 6 are the metropolitan area network address; and bytes 7 and 8 are the access network address;
the Source Address (SA) is also composed of 8 bytes and is defined in the same way as the Destination Address (DA);
the reserved field consists of 2 bytes;
the payload length differs by datagram type: 64 bytes for the various protocol packets, and 32+1024 = 1056 bytes for unicast packets; of course, the length is not limited to these 2 cases;
the CRC consists of 4 bytes and is calculated according to the standard ethernet CRC algorithm.
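Under the field layout above, packing and unpacking such an access-network packet can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are ours, fields are assumed big-endian, and zlib's CRC-32 (which shares the standard Ethernet polynomial) stands in for the hardware CRC.

```python
import struct
import zlib  # zlib.crc32 uses the same polynomial as the standard Ethernet CRC-32

def pack_access_packet(pkt_type, metro_addr, access_addr, sa, payload):
    """DA(8) + SA(8) + Reserved(2) + Payload + CRC(4), per the layout above."""
    assert len(metro_addr) == 5 and len(access_addr) == 2 and len(sa) == 8
    da = bytes([pkt_type]) + metro_addr + access_addr  # byte 1: packet type (at most 256 types)
    body = da + sa + b"\x00\x00" + payload             # 2 reserved bytes
    return body + struct.pack(">I", zlib.crc32(body))

def unpack_access_packet(pkt):
    """Split a packet back into its fields and verify the CRC."""
    body, crc = pkt[:-4], pkt[-4:]
    assert struct.pack(">I", zlib.crc32(body)) == crc, "CRC mismatch"
    return {"type": body[0], "metro": body[1:6], "access": body[6:8],
            "sa": body[8:16], "payload": body[18:]}
```

For a protocol packet the payload would be 64 bytes, giving a total packet length of 8+8+2+64+4 = 86 bytes.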
2.2 metropolitan area network packet definition
The topology of the metropolitan area network is a graph, and there may be 2 or even more connections between two devices; that is, more than 2 connections may exist between a node switch and a node server, between two node switches, or between two node servers. However, the metropolitan area network address of each device is unique, so in order to accurately describe the connection relationship between metropolitan area network devices, the embodiment of the present invention introduces a parameter: a label, to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label: assuming there are two connections between device A and device B, a packet from device A to device B has 2 available labels, and a packet from device B to device A likewise has 2 labels. Labels are divided into incoming labels and outgoing labels: assuming the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is a process under centralized control; that is, both address allocation and label allocation are dominated by the metropolitan area server, while the node switch and node server execute passively. This differs from MPLS, where label allocation is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA SA Reserved label (R) Payload CRC
Namely Destination Address (DA), Source Address (SA), Reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; it is positioned between the reserved bytes and the payload of the packet.
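Given that the reserved bytes end at offset 18 (DA 8 + SA 8 + Reserved 2), label insertion and MPLS-style label swapping at a node switch can be sketched as below. This is illustrative only (the real devices do this in hardware under the metropolitan area server's control), and the helper names are ours.

```python
import struct

RESERVED_END = 18  # DA(8) + SA(8) + Reserved(2)

def insert_label(access_body, out_label):
    """Insert the 32-bit label field: upper 16 bits reserved (zero), lower 16 bits used."""
    assert 0 <= out_label <= 0xFFFF
    return access_body[:RESERVED_END] + struct.pack(">I", out_label) + access_body[RESERVED_END:]

def swap_label(metro_pkt, label_map):
    """Rewrite the incoming label to the outgoing label, e.g. 0x0000 -> 0x0001 at device A."""
    (in_label,) = struct.unpack(">I", metro_pkt[RESERVED_END:RESERVED_END + 4])
    out = struct.pack(">I", label_map[in_label & 0xFFFF])
    return metro_pkt[:RESERVED_END] + out + metro_pkt[RESERVED_END + 4:]
```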
The information processing method in the video conference can be applied to the video conference of the video network. The video network can comprise a conference terminal, a video network server (which can be the node server) and a video network terminal.
A video network terminal is a terminal that performs services based on the video networking protocol, and may be any of various Set Top Boxes (STBs) and the like based on that protocol. A video network terminal must register with the video network server before it can perform normal services; after successful registration, the video network server allocates information such as a terminal number to the terminal. The video network terminal then logs in to the video network server with this terminal number to establish a connection with it. In the video network, each video network terminal can be distinguished by its terminal number.
The conference control terminal can be conference control software installed on a PC (personal computer), mobile phone, tablet computer, or the like, and the video network video conference can be controlled through it.
When a video network video conference is established, the video network terminals to be accessed are selected through the conference control terminal, and each terminal's role in the conference is set; the roles may include chairman terminal, speaking party terminal, and participant terminal. The conference control terminal can send a conference invitation to each selected video network terminal to invite it to join the video conference, and the invitation can carry the role information. The conference control terminal sends the invitation to the video network server, which forwards it to each video network terminal over the downlink communication link configured for that terminal. When a video network terminal accepts the invitation, it returns an acceptance response to the video network server, which returns each terminal's response to the conference control terminal; the terminal thereby joins the video conference. The video network server can store information such as the terminal number and name of every video network terminal in the conference.
Referring to fig. 5, a flowchart illustrating steps of an information processing method in a video conference according to an embodiment of the present invention is shown.
The information processing method in the video conference of the embodiment of the invention can comprise the following steps:
step 501, the video network server receives the caption information sent by the chairman terminal.
In a video conference of the video network, the chairman terminal manages the conference uniformly and can also speak in it, the speaking party terminal speaks in the conference, and the participant terminals receive the speech data.
In order to let the participant terminals learn more about the speaking party, the conference control terminal can acquire the caption information after selecting the speaking party terminal and send it to the chairman terminal based on the video networking protocol. The caption information may include terminal information of the speaking party terminal, such as the terminal name and terminal number.
The speaking party terminal may be selected either when the conference control terminal picks the speaking party after the video conference is established, or when it switches speakers during the conference.
And after receiving the subtitle information, the chairman side terminal sends the subtitle information to the video networking server based on the video networking protocol.
Step 502, the video network server forwards the subtitle information to the speaking party terminal.
In the embodiment of the invention, the caption information is sent along with the speech data; since the speech data is sent by the speaking party terminal, the video network server forwards the caption information to the speaking party terminal after receiving it.
In an alternative embodiment, the video network server forwards the caption information to the speaking party terminal according to a downlink communication link configured for the speaking party terminal.
In practical applications, the video network is a network with a centralized control function, comprising a master control server and lower-level network devices, where the lower-level devices include the video network terminals. One of the core concepts of the video network is that, for the current service, the master control server notifies the switching devices on the downlink communication link to configure their tables, and packets are then transmitted based on the configured tables.
Namely, the communication method in the video network includes:
and the master control server configures the downlink communication link of the current service.
And transmitting the data packet of the current service sent by the source terminal to the target terminal according to the downlink communication link.
In the embodiment of the present invention, configuring the downlink communication link of the current service includes: and informing the switching equipment related to the downlink communication link of the current service to allocate the table.
Further, transmitting according to the downlink communication link includes: the switching equipment consults the configured table and forwards each received data packet through the corresponding port.
In a specific implementation, the services include unicast communication services and multicast communication services. That is, whether for multicast or unicast communication, the core concept of "configure the table, then forward by the table" can be adopted to realize communication in the video network.
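The "configure the table, then forward by the table" idea can be illustrated with a small sketch. The class and method names here are ours, not from the patent; a real access switch applies the port-configuration commands in hardware.

```python
class AccessSwitch:
    """Minimal sketch of centrally configured, table-based forwarding."""

    def __init__(self):
        self.addr_table = {}  # destination address -> set of downlink ports

    def configure(self, da, ports):
        """Applied on a port-configuration command from the master (node) server."""
        self.addr_table[da] = set(ports)

    def forward(self, packet):
        """Consult the configured table; emit (port, packet) pairs, or drop if unconfigured."""
        da = packet[:8]
        return [(port, packet) for port in sorted(self.addr_table.get(da, ()))]

# Unicast maps a DA to a single downlink port; multicast simply maps the same DA
# to several ports, so the same lookup serves both service types.
```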
As mentioned above, the video network includes an access network portion, in which the master server is a node server and the lower-level network devices include an access switch and a terminal.
For the unicast communication service in the access network, the step of configuring the downlink communication link of the current service by the master server may include the following steps:
and a substep S11, the main control server obtains the downlink communication link information of the current service according to the service request protocol packet initiated by the source terminal, wherein the downlink communication link information includes the downlink communication port information of the main control server and the access switch participating in the current service.
In the substep S12, the main control server sets a downlink port to which a packet of the current service is directed in a packet address table inside the main control server according to the downlink communication port information of the main control server; and sending a port configuration command to the corresponding access switch according to the downlink communication port information of the access switch.
In sub-step S13, the access switch sets the downstream port to which the packet of the current service is directed in its internal packet address table according to the port configuration command.
For a multicast communication service (e.g., video conference) in the access network, the step of the master server obtaining downlink information of the current service may include the following sub-steps:
in sub-step S21, the main control server obtains a service request protocol packet initiated by the target terminal and applying for the multicast communication service, where the service request protocol packet includes service type information, service content information, and an access network address of the target terminal.
Wherein, the service content information includes a service number.
And a substep S22, the main control server extracts the access network address of the source terminal in a preset content-address mapping table according to the service number.
In the substep of S23, the main control server obtains the multicast address corresponding to the source terminal and distributes the multicast address to the target terminal; and acquiring the communication link information of the current multicast service according to the service type information and the access network addresses of the source terminal and the target terminal.
Step 503, the speaking party terminal acquires speaking data and sends the caption information to the video networking server along with the speaking data.
The user of the speaking party terminal speaks in the video network video conference, and both the caption information and the speech data must reach the participant terminals; therefore the speaking party terminal acquires the speech data and sends the caption information to the video network server along with it.
In an alternative embodiment, the step of sending the caption information to the video network server along with the speech data may include steps A1 to A3:
a1, the speaking party terminal packages the caption information into a first data packet carrying a first operation code.
And the speaking party terminal encapsulates the caption information into a first data packet, and the first data packet carries a first operation code. The first operation code represents that the data type of the data packet is subtitle information.
In the embodiment of the present invention, the caption information may be transmitted together with the utterance data in the form of frame data. The caption information is encapsulated by using a data packet structure with an internal operation code of 2018, so that the first operation code may be 2018.
The packet structure with the internal opcode 2018 is defined in a table in the original publication (reproduced there only as an image).
Field numbers 15-526 are used to encapsulate the caption information and related parameters; from this it can be seen that the length of the caption information is 16 words, that is, 512 bits.
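A hedged sketch of such a caption frame follows. Only the opcode value 2018 and the fixed 512-bit (64-byte) caption field are taken from the text; since the exact field offsets come from a table reproduced only as an image, the 2-byte big-endian opcode prefix and zero padding here are illustrative assumptions.

```python
OP_CAPTION = 2018   # first operation code: data type is caption information
CAPTION_BYTES = 64  # 16 words x 32 bits = 512 bits

def pack_caption_frame(caption):
    """Illustrative framing: 2-byte big-endian opcode + fixed 512-bit caption field."""
    assert len(caption) <= CAPTION_BYTES, "caption exceeds the 512-bit field"
    return OP_CAPTION.to_bytes(2, "big") + caption.ljust(CAPTION_BYTES, b"\x00")
```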
A2, the speaking party terminal encapsulates the speaking data into a second data packet carrying a second operation code.
And the speaking party terminal encapsulates the speaking data into a second data packet, and the second data packet carries a second operation code. And the second operation code represents that the data type of the data packet is speech data.
In an implementation, the talk data may comprise audio talk data and video talk data, and thus the second data packet may comprise an audio talk data packet and a video talk data packet. The second operation code carried in the audio speech data packet represents that the data type of the data packet is audio data, and the second operation code carried in the video speech data packet represents that the data type of the data packet is video data.
The talker terminal acquires an audio signal collected by the microphone, encodes the audio signal to obtain audio talk data, and encapsulates the audio talk data using a data packet structure with an internal operation code of 2001, so the second operation code may be 2001.
The speaker terminal acquires a video signal acquired by the camera, encodes the video signal to obtain video speech data, and encapsulates the video speech data by using a data packet structure with an internal operation code of 2002, so that the second operation code can be 2002.
A3, the speaking party terminal sends the first data packet and the second data packet to the video network server based on the video network transparent transmission protocol.
And the speaking party terminal sends the first data packet and the second data packet together to the video network server based on the video network transparent transmission protocol 0x8f85.
In a specific implementation, the speaking party terminal may send the caption information by calling a caption sending interface function int32 send_2018(uint8 *data, uint32 length), where data is the caption information.
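Steps A1 to A3 can be sketched as follows. Only the opcode values 2001/2002/2018 and the protocol type 0x8f85 come from the text; the `send_frame` transport primitive and the 2-byte opcode prefix are assumptions for illustration, and `RecordingLink` is a stand-in used only to demonstrate the call sequence.

```python
PASSTHROUGH_PROTOCOL = 0x8F85  # video network transparent transmission protocol
OP_AUDIO, OP_VIDEO, OP_CAPTION = 2001, 2002, 2018

def send_speech_with_caption(link, audio, video, caption):
    """Tag each payload with its opcode and hand all frames to the same channel,
    so the caption information travels along with the speech data stream."""
    for opcode, payload in ((OP_AUDIO, audio), (OP_VIDEO, video), (OP_CAPTION, caption)):
        link.send_frame(PASSTHROUGH_PROTOCOL, opcode.to_bytes(2, "big") + payload)

class RecordingLink:
    """Stand-in transport that records frames instead of transmitting them."""
    def __init__(self):
        self.frames = []
    def send_frame(self, proto, frame):
        self.frames.append((proto, frame))
```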
Consider that if the caption information sent by the chairman terminal were forwarded directly to the participant terminals through the video network server, the server would send it only once; if the caption information were lost in transit, the participant terminals would never receive it.
In an alternative embodiment, since the speech data is transmitted continuously as a data stream, the caption information may be sent repeatedly along with it. The speaking party terminal can therefore send the caption information to the video network server along with the speech data at set time intervals. Repeatedly sending the caption information mitigates the problem of a lost caption never reaching the participant terminals. For the specific value of the set time, a person skilled in the art may select any suitable value according to the actual situation, for example 30 seconds or 1 minute; this is not limited in the embodiment of the present invention.
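The periodic resend can be approximated per-frame rather than per-second in a short sketch (the helper name and frame-count interval are ours; the embodiment's interval is a wall-clock time such as 30 seconds):

```python
def frames_with_periodic_caption(media_frames, caption_frame, interval):
    """Re-emit the caption frame after every `interval` media frames, approximating
    'send the caption information at set time intervals along with the speech data'."""
    for i, frame in enumerate(media_frames, start=1):
        yield frame
        if i % interval == 0:
            yield caption_frame
```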
Considering the situation that the subtitle information may be in error, for example, the subtitle information sent by the conference terminal is incomplete, or is partially lost in the transmission process of the subtitle information, etc., the embodiment of the present invention may further add an authentication process to ensure that the subtitle information sent to the participant terminal is correct.
In an optional implementation manner, after receiving the caption information, the speaking party terminal parses it and compares the terminal information it contains with the terminal's own information. If they are consistent, the caption information can be determined to be correct, and the speaking party terminal continues to acquire speech data and send the caption information to the video network server along with it. If they are inconsistent, the caption information can be determined to be wrong; in this case, the speaking party terminal may return an error response to the conference control terminal via the video network server and the chairman terminal in sequence, and the conference control terminal performs corresponding processing, such as resending the caption information.
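This verification step amounts to a field-by-field comparison, sketched below. The field names are illustrative; the embodiment compares terminal information such as the terminal name and terminal number.

```python
def verify_caption(caption_info, own_info):
    """The speaking party terminal checks that the caption describes itself
    before forwarding it along with the speech data."""
    return (caption_info.get("terminal_number") == own_info.get("terminal_number")
            and caption_info.get("terminal_name") == own_info.get("terminal_name"))
```

A mismatch would trigger the error response back to the conference control terminal described above.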
Step 504, the video network server forwards the caption information to the participant terminal along with the speech data.
According to the description in step 503, the speaking party terminal sends the first data packet and the second data packet to the video network server together based on the video network transparent transmission protocol, so that the video network server forwards the first data packet and the second data packet to the participant terminal based on the video network transparent transmission protocol after receiving the first data packet and the second data packet.
In an optional implementation manner, the video network server may obtain terminal information, such as a terminal number, of each participant terminal, and correspondingly forward the caption information to each participant terminal along with the speech data according to a downlink communication link configured for each participant terminal.
And 505, the participant terminal analyzes the subtitle information and displays the subtitle information.
In a specific implementation, the participant terminal may receive the caption information by calling a caption receiving interface function int32 rcv_2018_process(session_t *session, char *data), where data is the caption information.
After receiving the caption information and the speech data, the participant terminal parses them separately. During parsing, the participant terminal extracts the first operation code carried by the first data packet and the second operation code carried by the second data packet. If it determines that the first operation code is the one indicating the caption-information data type, it parses the first data packet to obtain the caption information; if it determines that the second operation code is the one indicating the speech-data type, it parses the second data packet to obtain the speech data.
And the participant terminal analyzes the caption information and the speech data, outputs the speech data and displays the caption information. For example, audio speech data is played through a speaker, and video speech data and caption information are displayed through a display screen.
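The participant-side routing by operation code can be sketched as below, assuming each frame carries a 2-byte big-endian opcode prefix (an illustrative framing; only the opcode values 2001, 2002, and 2018 come from the text).

```python
OP_AUDIO, OP_VIDEO, OP_CAPTION = 2001, 2002, 2018

def dispatch(frame):
    """Read the opcode prefix and route the payload to the matching output path."""
    opcode = int.from_bytes(frame[:2], "big")
    payload = frame[2:]
    if opcode == OP_CAPTION:
        return ("display_caption", payload)  # shown on the display screen
    if opcode == OP_AUDIO:
        return ("play_audio", payload)       # played through the speaker
    if opcode == OP_VIDEO:
        return ("render_video", payload)     # shown on the display screen
    return ("drop", payload)                 # unknown data type
```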
In the embodiment of the invention, the speaking party terminal sends the caption information along with the video data. Because the data length encapsulated when sending video data is comparatively long, the caption information can be encapsulated and sent as frame data when it follows the video data; this increases the length available for the encapsulated caption information and overcomes the data-length limitation of the prior art.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 6, a block diagram of an information processing apparatus in a video conference according to an embodiment of the present invention is shown. The information processing device in the video conference can be applied to a video conference of a video network, wherein the video network comprises a conference control terminal, a video network server and a video network terminal, and the video network terminal comprises a chairman terminal, a speaking party terminal and a participant terminal.
The information processing device in the video conference of the embodiment of the invention can comprise the following modules:
the video network server comprises:
a receiving module 601, configured to receive subtitle information sent by the chairman terminal; the caption information is sent to the chairman side terminal by the conference control terminal after a speaker side terminal is selected, and the caption information comprises terminal information of the speaker side terminal;
a first forwarding module 602, configured to forward the subtitle information to the speaker terminal;
the talker terminal includes:
a sending module 603, configured to obtain speech data, and send the subtitle information to the video networking server along with the speech data;
the video network server further comprises:
a second forwarding module 604, configured to forward the subtitle information to the participant terminal along with the speech data;
the participant terminal includes:
and the parsing module 605 is configured to parse the subtitle information and display the subtitle information.
In an optional embodiment, the sending module comprises: the encapsulation unit is used for encapsulating the subtitle information into a first data packet carrying a first operation code and encapsulating the speech data into a second data packet carrying a second operation code; the first operation code represents that the data type is caption information, and the second operation code represents that the data type is speech data; the length of the caption information in the first data packet is 16 words; and the information sending unit is used for sending the first data packet and the second data packet to the video networking server together based on a video networking transparent transmission protocol.
In an alternative embodiment, the parsing module comprises: an extracting unit, configured to extract a first operation code carried by the first data packet and a second operation code carried by the second data packet, respectively; and the information analysis unit is used for analyzing the first data packet to obtain the subtitle information if the first operation code is determined to be the operation code representing the data type as the subtitle information.
In an optional implementation manner, the sending module is specifically configured to send the subtitle information to the video network server along with the speech data at set time intervals.
In an optional embodiment, the speaking party terminal further includes: a comparison module, configured to parse the caption information and compare the terminal information included in it with the terminal information of the speaking party terminal itself; the sending module is specifically configured to, when the comparison result of the comparison module is consistent, obtain speech data and send the caption information to the video network server along with it.
In the embodiment of the invention, the speaking party terminal sends the caption information along with the video data. Because the data length encapsulated when sending video data is comparatively long, the caption information can be encapsulated and sent as frame data when it follows the video data; this increases the length available for the encapsulated caption information and overcomes the data-length limitation of the prior art.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
In the embodiment of the invention, an information processing apparatus in a video conference is also provided. The apparatus may include one or more processors and one or more machine-readable media having instructions (such as an application program) stored thereon that, when executed by the one or more processors, cause the apparatus to perform the information processing method in the video conference described above.
In an embodiment of the present invention, there is also provided a non-transitory computer-readable storage medium, such as a memory, including instructions executable by a processor of an electronic device to perform the information processing method in a video conference described above. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The information processing method, the information processing apparatus, and the storage medium in the video conference provided by the present invention are introduced in detail, and a specific example is applied in the present document to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An information processing method in a video conference, characterized in that the method is applied to a video conference of a video network, wherein the video network comprises a conference control terminal, a video network server, and video network terminals, and the video network terminals comprise a chairman terminal, a speaking party terminal, and a participant terminal; the method comprises the following steps:
the video network server receives caption information sent by the chairman terminal, wherein the caption information is sent to the chairman terminal by the conference control terminal after a speaking party terminal is selected, and the caption information comprises terminal information of the speaking party terminal;
the video network server forwards the caption information to the speaking party terminal;
the speaking party terminal acquires speech data and, at set time intervals, sends the caption information together with the speech data to the video network server in the form of frame data;
the video network server forwards the caption information together with the speech data to the participant terminal; and
the participant terminal parses the caption information and displays the caption information.
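The message flow recited in claim 1 can be illustrated by a minimal, hypothetical simulation. Every class and method name below is invented for illustration; none is taken from the patent's actual implementation:

```python
# A minimal sketch of the claim-1 flow; all names are hypothetical.
class SpeakerTerminal:
    def __init__(self):
        self.caption = None

    def on_caption(self, caption):
        # The server has forwarded the caption info to the speaker terminal.
        self.caption = caption

    def send_frames(self, server, speech_frames):
        # The caption info rides along with every frame of speech data.
        for frame in speech_frames:
            server.forward(self.caption, frame)


class ParticipantTerminal:
    def __init__(self):
        self.displayed = []

    def on_frame(self, caption, speech_frame):
        # Parse the caption info carried with the frame and display it.
        self.displayed.append(caption)


class VideoNetworkServer:
    def __init__(self, speaker, participants):
        self.speaker = speaker
        self.participants = participants

    def receive_caption_from_chairman(self, caption):
        # Caption info arrives from the chairman terminal ...
        self.speaker.on_caption(caption)  # ... and is forwarded to the speaker.

    def forward(self, caption, speech_frame):
        # Forward caption info together with speech data to each participant.
        for p in self.participants:
            p.on_frame(caption, speech_frame)
```

In this toy model the caption set by the chairman reaches every participant attached to each speech frame, mirroring the five steps of the claim.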
2. The method of claim 1, wherein the step of sending the caption information together with the speech data to the video network server comprises:
the speaking party terminal encapsulates the caption information into a first data packet carrying a first operation code, and encapsulates the speech data into a second data packet carrying a second operation code, wherein the first operation code indicates that the data type is caption information, the second operation code indicates that the data type is speech data, and the length of the caption information in the first data packet is 16 words; and
the speaking party terminal sends the first data packet and the second data packet together to the video network server based on a video network transparent transmission protocol.
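The encapsulation of claim 2 can be sketched as follows. The opcode values, the one-byte opcode header, and the interpretation of "16 words" as 32 bytes (2-byte words) are all assumptions for illustration; the patent discloses none of these specifics:

```python
import struct

# Hypothetical opcode values and packet layout; the patent discloses neither.
OP_CAPTION = 0x01          # first operation code: data type is caption information
OP_SPEECH = 0x02           # second operation code: data type is speech data
CAPTION_FIELD_BYTES = 32   # "16 words", assuming a 2-byte word

def pack_caption(caption: str) -> bytes:
    """First data packet: opcode byte followed by a fixed-length caption field."""
    payload = caption.encode("utf-8")[:CAPTION_FIELD_BYTES]
    return struct.pack("!B", OP_CAPTION) + payload.ljust(CAPTION_FIELD_BYTES, b"\x00")

def pack_speech(frame: bytes) -> bytes:
    """Second data packet: opcode byte followed by the raw speech frame."""
    return struct.pack("!B", OP_SPEECH) + frame
```

A fixed-length caption field keeps the first packet's size constant, so a receiver can distinguish the two packet types from the opcode alone without a length prefix.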
3. The method of claim 2, wherein the step of the participant terminal parsing the caption information comprises:
the participant terminal extracts the first operation code carried by the first data packet and the second operation code carried by the second data packet, respectively; and
the participant terminal determines that the first operation code is the operation code indicating that the data type is caption information, and parses the first data packet to obtain the caption information.
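The receiving side of claim 3 could dispatch on the extracted operation code as sketched below, again assuming a hypothetical one-byte-opcode layout with a zero-padded 32-byte caption field:

```python
# Hypothetical opcodes matching an assumed 1-byte-opcode packet layout.
OP_CAPTION = 0x01
OP_SPEECH = 0x02

def parse_packet(packet: bytes):
    """Extract the operation code, then dispatch on the data type it indicates."""
    opcode = packet[0]
    if opcode == OP_CAPTION:
        # Caption packet: strip the fixed-field padding and decode the text.
        return ("caption", packet[1:].rstrip(b"\x00").decode("utf-8"))
    if opcode == OP_SPEECH:
        # Speech packet: the remainder is the raw audio frame.
        return ("speech", packet[1:])
    raise ValueError(f"unknown operation code: {opcode:#04x}")
```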
4. The method of claim 1, wherein before the steps of acquiring speech data and sending the caption information together with the speech data to the video network server, the method further comprises:
the speaking party terminal parses the caption information and compares the terminal information included in the caption information with the terminal information of the speaking party terminal itself; and
the step in which the speaking party terminal acquires speech data and sends the caption information together with the speech data to the video network server comprises:
when the comparison result is consistent, the speaking party terminal acquires the speech data and sends the caption information together with the speech data to the video network server.
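The guard of claim 4 amounts to a simple identity check before sending. In this sketch the caption is modelled as a dict and the field name `terminal_info` is invented for illustration:

```python
# Hypothetical model of claim 4's guard: the speaking party terminal only
# acquires and sends speech data when the terminal info carried inside the
# caption matches its own. The 'terminal_info' key is an invented field name.
def send_if_selected(caption: dict, own_terminal_info: str):
    if caption["terminal_info"] != own_terminal_info:
        return None                    # comparison inconsistent: do not send
    speech_data = b"speech-frame"      # stand-in for acquired speech data
    return (speech_data, caption)      # send caption along with speech data
```

The check prevents a terminal that merely received the forwarded caption, but was not the selected speaker, from injecting caption-bearing frames into the conference.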
5. An information processing apparatus in a video conference, characterized in that the apparatus is applied to a video conference of a video network, wherein the video network comprises a conference control terminal, a video network server, and video network terminals, and the video network terminals comprise a chairman terminal, a speaking party terminal, and a participant terminal;
the video network server comprises:
a receiving module, configured to receive caption information sent by the chairman terminal, wherein the caption information is sent to the chairman terminal by the conference control terminal after a speaking party terminal is selected, and the caption information comprises terminal information of the speaking party terminal; and
a first forwarding module, configured to forward the caption information to the speaking party terminal;
the speaking party terminal comprises:
a sending module, configured to acquire speech data and, at set time intervals, send the caption information together with the speech data to the video network server in the form of frame data;
the video network server further comprises:
a second forwarding module, configured to forward the caption information together with the speech data to the participant terminal; and
the participant terminal comprises:
a parsing module, configured to parse the caption information and display the caption information.
6. The apparatus of claim 5, wherein the sending module comprises:
an encapsulation unit, configured to encapsulate the caption information into a first data packet carrying a first operation code and to encapsulate the speech data into a second data packet carrying a second operation code, wherein the first operation code indicates that the data type is caption information, the second operation code indicates that the data type is speech data, and the length of the caption information in the first data packet is 16 words; and
an information sending unit, configured to send the first data packet and the second data packet together to the video network server based on a video network transparent transmission protocol.
7. The apparatus of claim 6, wherein the parsing module comprises:
an extracting unit, configured to extract the first operation code carried by the first data packet and the second operation code carried by the second data packet, respectively; and
an information parsing unit, configured to parse the first data packet to obtain the caption information if it is determined that the first operation code is the operation code indicating that the data type is caption information.
8. The apparatus of claim 5, wherein the speaking party terminal further comprises:
a comparison module, configured to parse the caption information and compare the terminal information included in the caption information with the terminal information of the speaking party terminal itself;
wherein the sending module is specifically configured to, when the comparison result of the comparison module is consistent, acquire the speech data and send the caption information together with the speech data to the video network server.
9. An information processing apparatus in a video conference, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the method of information processing in a video conference of any of claims 1 to 4.
10. A computer-readable storage medium, characterized in that it stores a computer program which causes a processor to execute the information processing method in a video conference according to any one of claims 1 to 4.
CN201910364256.3A 2019-04-30 2019-04-30 Information processing method and device in video conference and storage medium Active CN110049275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910364256.3A CN110049275B (en) 2019-04-30 2019-04-30 Information processing method and device in video conference and storage medium

Publications (2)

Publication Number Publication Date
CN110049275A CN110049275A (en) 2019-07-23
CN110049275B true CN110049275B (en) 2021-05-14

Family

ID=67280592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910364256.3A Active CN110049275B (en) 2019-04-30 2019-04-30 Information processing method and device in video conference and storage medium

Country Status (1)

Country Link
CN (1) CN110049275B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110719432A (en) * 2019-09-11 2020-01-21 视联动力信息技术股份有限公司 Data transmission method and device, electronic equipment and storage medium
CN112035030B (en) * 2020-08-28 2022-03-29 北京字节跳动网络技术有限公司 Information display method and device and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101335885A (en) * 2008-07-30 2008-12-31 中兴通讯股份有限公司 Transmission method of multimedia broadcast subtitle information and transmitting/receiving apparatus
CN105103520A (en) * 2013-04-03 2015-11-25 高通股份有限公司 Rewinding a real-time communication session

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150281643A1 (en) * 2014-03-31 2015-10-01 John Mincone Video reply system and method
CN107911646B (en) * 2016-09-30 2020-09-18 阿里巴巴集团控股有限公司 Method and device for sharing conference and generating conference record
CN107959817B (en) * 2016-10-17 2019-04-26 视联动力信息技术股份有限公司 A kind of caption presentation method and device
CN108234922B (en) * 2016-12-14 2019-03-01 视联动力信息技术股份有限公司 A kind of recorded broadcast method and device
CN108574688B (en) * 2017-09-18 2021-01-01 视联动力信息技术股份有限公司 Method and device for displaying participant information
CN109302576B (en) * 2018-09-05 2020-08-25 视联动力信息技术股份有限公司 Conference processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant