CN109768964B - Audio and video display method and device - Google Patents

Audio and video display method and device

Info

Publication number
CN109768964B
Authority
CN
China
Prior art keywords
audio
video
video stream
stream data
control software
Prior art date
Legal status
Active
Application number
CN201811526872.6A
Other languages
Chinese (zh)
Other versions
CN109768964A (en)
Inventor
董岩
亓娜
袁占涛
王艳辉
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201811526872.6A priority Critical patent/CN109768964B/en
Publication of CN109768964A publication Critical patent/CN109768964A/en
Application granted granted Critical
Publication of CN109768964B publication Critical patent/CN109768964B/en

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the invention provides an audio and video display method and device. The method comprises the following steps: after a terminal accesses designated video networking conference control software, acquiring a split screen number returned by the designated video networking conference control software; dividing the display screen of the terminal into regions according to the split screen number to generate at least one sub-region; receiving at least one piece of audio/video stream data acquired by the designated video networking conference control software from a video networking server; and displaying each piece of audio/video stream data in its corresponding sub-region. The invention can display multiple channels of video pictures simultaneously and improve the user experience.

Description

Audio and video display method and device
Technical Field
The invention relates to the technical field of video networking, in particular to an audio and video display method and an audio and video display device.
Background
The video networking is an important milestone in network development. It is a higher-level form of the internet and a real-time network that can realize network-wide real-time transmission of high-definition video, which the existing internet cannot, and it pushes many internet applications towards high-definition video. As a result, video networking conferences based on the video networking technology, with face-to-face high-definition video, have developed rapidly.
In the prior art, a video networking conference can be controlled with the Pamier software, but the user cannot see multiple video pictures at the same time.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed to provide an audio and video display method and a corresponding audio and video display device that overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention discloses an audio and video display method, including: after a terminal accesses designated video networking conference control software, acquiring a split screen number returned by the designated video networking conference control software; dividing the display screen of the terminal into regions according to the split screen number to generate at least one sub-region; receiving at least one piece of audio/video stream data acquired by the designated video networking conference control software from a video networking server; and displaying each piece of audio/video stream data in its corresponding sub-region.
Preferably, before the step of acquiring the split screen number returned by the designated video networking conference control software, the method further comprises: sending a registration message to the video networking server through the designated video networking conference control software; and receiving the video networking terminal number which is distributed for the terminal by the video networking server and returned by the appointed video networking conference control software.
Preferably, each of the audio/video stream data is bound with a corresponding channel number, and the step of displaying each of the audio/video stream data in the corresponding sub-area includes: determining a sub-region corresponding to each audio/video stream data according to the mapping relation between the channel number and the sub-region; and aiming at each audio and video stream data, sequentially displaying the audio and video stream data in the subareas according to the subareas corresponding to the audio and video stream data.
Preferably, before the step of displaying each audio-video stream data on each corresponding subregion, the method further includes: performing audio and video decoding and format conversion processing on the audio and video stream data; the step of displaying each audio/video stream data in the corresponding sub-area includes: and respectively displaying the processed audio and video stream data in the corresponding subareas.
Preferably, after the step of displaying each of the audio-video stream data on the corresponding sub area, the method further includes: receiving at least one next audio and video stream data acquired by the appointed video networking conference control software from a video networking server; and respectively displaying each next audio/video stream data in the corresponding subarea.
In order to solve the above problem, an embodiment of the present invention further discloses an audio/video display device, including: the screen dividing number acquisition module is used for acquiring the screen dividing number returned by the appointed video networking conference control software after the terminal is accessed to the appointed video networking conference control software; the sub-region generation module is used for carrying out region division on the display screen of the terminal according to the split screen number to generate at least one sub-region; the audio and video stream receiving module is used for receiving at least one audio and video stream data acquired by the appointed video networking conference control software from a video networking server; and the audio and video stream display module is used for respectively displaying the audio and video stream data in the corresponding subareas.
Preferably, the device further comprises: the registration message sending module is used for sending registration messages to the video networking server through the appointed video networking conference control software; and the video networking terminal number receiving module is used for receiving the video networking terminal number which is returned by the appointed video networking conference control software and is distributed to the terminal by the video networking server.
Preferably, each of the audio/video stream data is bound with a corresponding channel number, and the audio/video stream display module includes: the sub-region determining sub-module is used for determining the sub-region corresponding to each audio/video stream data according to the mapping relation between the channel number and the sub-region; and the first audio and video stream display submodule is used for sequentially displaying the audio and video stream data in the subareas according to the subareas corresponding to the audio and video stream data aiming at the audio and video stream data.
Preferably, the device further comprises: the decoding conversion processing module is used for carrying out audio and video decoding and format conversion processing on the audio and video stream data; the audio and video stream display module comprises: the second audio and video stream display submodule, used for respectively displaying the processed audio and video stream data in the corresponding subareas.
Preferably, the device further comprises: the next audio and video stream receiving module is used for receiving at least one next audio and video stream data acquired by the appointed video networking conference control software from the video networking server; and the next audio and video stream display module is used for respectively displaying each next audio and video stream data in the corresponding sub-area.
In the embodiment of the invention, after the terminal accesses the appointed video networking conference control software, the split screen number returned by the appointed video networking conference control software is acquired, the display screen of the terminal is divided into regions according to the split screen number to generate at least one sub-region, at least one piece of audio and video stream data acquired by the appointed video networking conference control software from a video networking server is received, and each piece of audio and video stream data is displayed in its corresponding sub-region. The embodiment of the invention can thus display multiple channels of video pictures simultaneously and improve the user experience.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware structure of a node server of the present invention;
FIG. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
FIG. 4 is a schematic diagram of a hardware structure of an Ethernet protocol conversion gateway of the present invention;
FIG. 5 is a flow chart of the steps of an audio and video display method of the present invention;
FIG. 6 is a flow chart of the steps of an audio and video display method of the present invention;
FIG. 7 is a schematic structural diagram of an audio/video display device of the present invention;
FIG. 8 is a schematic structural diagram of an audio/video display device of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing many internet applications towards high-definition, face-to-face video.
The video networking adopts real-time high-definition video switching technology and can integrate, on one system platform, dozens of required services such as video, voice, pictures, text, communication and data, for example high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, delayed television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control and information distribution, and realizes high-definition-quality video broadcasting through a television or a computer.
To better understand the embodiments of the present invention, the video networking is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
The network technology innovation of the video networking improves on traditional Ethernet to handle the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video networking technology adopts Packet Switching to meet streaming requirements. The video networking technology has the flexibility, simplicity and low cost of packet switching, together with the quality and security guarantees of circuit switching, thereby realizing network-wide switched virtual circuits and seamless connection of data formats.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates the defects of Ethernet on the premise of full compatibility; it provides network-wide end-to-end seamless connection, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere across the network. The video networking is a higher-level form of Ethernet and a real-time exchange platform that can realize network-wide, large-scale real-time transmission of high-definition video, which the existing internet cannot, and pushes many network video applications towards high definition and unification.
Server Technology (Server Technology)
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology (Storage Technology)
The super-high speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to adapt to the media content with super-large capacity and super-large flow, the program information in the server instruction is mapped to the specific hard disk space, the media content is not passed through the server any more, and is directly sent to the user terminal instantly, and the general waiting time of the user is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical motion of the magnetic head track seeking of the hard disk, the resource consumption only accounts for 20% of that of the IP internet of the same grade, but concurrent flow which is 3 times larger than that of the traditional hard disk array is generated, and the comprehensive efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video network completely eliminates the network security problem troubling the internet structurally by the modes of independent service permission control each time, complete isolation of equipment and user data and the like, generally does not need antivirus programs and firewalls, avoids the attack of hackers and viruses, and provides a structural carefree security network for users.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services and transmission; whether for a single user, a private network user or a network aggregate, it only needs to connect automatically once. User terminals, set-top boxes or PCs connect directly to the unified video platform to obtain a variety of multimedia video services in various forms. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, so that complex applications can be realized with very little code, enabling unlimited innovation of new services.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
wherein, a packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 detects whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303; otherwise, the packet is discarded; a packet (downlink data) coming from the uplink network interface module 302 enters the switching engine module 303; a data packet coming from the CPU module 304 enters the switching engine module 303; the switching engine module 303 looks up the address table 306 for each incoming packet, thereby obtaining the direction information of the packet; if a packet entering the switching engine module 303 goes from a downlink network interface to an uplink network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; if that queue of the packet buffer 307 is nearly full, the packet is discarded; if a packet entering the switching engine module 303 does not go from a downlink network interface to an uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to its direction information; if that queue of the packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The code rate control module 308 is configured by the CPU module 304 and generates tokens, at programmable intervals, for all packet buffer queues going from downlink network interfaces to uplink network interfaces, so as to control the rate of uplink forwarding.
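The extra token condition for uplink-bound queues can be sketched in the same illustrative style, with a simple token bucket standing in for the code rate control module 308 (the names are assumptions).

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical token bucket standing in for the code rate control module 308.
struct TokenBucket {
    uint32_t tokens = 0;
    void refill(uint32_t n) { tokens += n; }   // called at programmable intervals
    bool take() { if (tokens == 0) return false; --tokens; return true; }
};

// Forwarding condition for one queue of the access switch (fig. 3).
// Uplink-bound queues (downlink interface -> uplink interface) additionally
// require a token from the rate control module.
bool mayForward(bool send_buffer_full, size_t queue_len,
                bool uplink_bound, TokenBucket& rate_ctrl) {
    if (send_buffer_full || queue_len == 0) return false;
    if (uplink_bound) return rate_ctrl.take();
    return true;
}
```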
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein, a data packet coming from the downlink network interface module 401 enters the packet detection module 405; the packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), and the MAC deleting module 410 then strips the MAC DA, MAC SA and length or frame type (2 bytes) so that the packet enters the corresponding receiving buffer; otherwise, the packet is discarded;
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
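As a rough illustration of the MAC deleting module 410 and MAC adding module 409 described above, the following sketch strips the 14-byte Ethernet header on ingress and prepends it again on egress; the helper names and the way the terminal's Ethernet MAC is obtained are assumptions.

```cpp
#include <array>
#include <cstdint>
#include <vector>

using Mac = std::array<uint8_t, 6>;
constexpr size_t kEthHeader = 6 + 6 + 2;   // MAC DA + MAC SA + length/frame type

// Ingress (MAC deleting module 410): strip the Ethernet header so only the
// video-networking packet (DA, SA, reserved, payload, CRC) remains.
std::vector<uint8_t> stripEthernet(const std::vector<uint8_t>& frame) {
    if (frame.size() <= kEthHeader) return {};
    return {frame.begin() + kEthHeader, frame.end()};
}

// Egress (MAC adding module 409): prepend the terminal's Ethernet MAC DA, the
// gateway's MAC SA and the length/frame type before sending. Looking up the
// terminal's MAC from the video-networking DA is left to the caller.
std::vector<uint8_t> addEthernet(const std::vector<uint8_t>& vn_packet,
                                 const Mac& terminal_mac, const Mac& gateway_mac,
                                 uint16_t eth_type) {
    std::vector<uint8_t> frame;
    frame.reserve(kEthHeader + vn_packet.size());
    frame.insert(frame.end(), terminal_mac.begin(), terminal_mac.end());
    frame.insert(frame.end(), gateway_mac.begin(), gateway_mac.end());
    frame.push_back(static_cast<uint8_t>(eth_type >> 8));
    frame.push_back(static_cast<uint8_t>(eth_type & 0xFF));
    frame.insert(frame.end(), vn_packet.begin(), vn_packet.end());
    return frame;
}
```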
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to different types of datagrams, and is 64 bytes if the datagram is various types of protocol packets, and is 32+1024 or 1056 bytes if the datagram is a unicast packet, of course, the length is not limited to the above 2 types;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more than 2 connections between two devices, i.e., there may be more than 2 connections between a node switch and a node server, or between a node switch and a node switch. However, the metro network address of a metro network device is unique; in order to accurately describe the connection relationship between metro network devices, a parameter is introduced in the embodiment of the present invention: a label, which uniquely describes a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming that there are two connections between device A and device B, there are 2 labels for packets going from device A to device B, and 2 labels for packets going from device B to device A. Labels are divided into incoming labels and outgoing labels; assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metro network is a network access process under centralized control, that is, both address allocation and label allocation of the metro network are dominated by the metropolitan area server, and the node switch and the node server execute them passively; this is different from MPLS label allocation, which is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA SA Reserved Label Payload CRC
Namely Destination Address (DA), Source Address (SA), Reserved bytes, Label, Payload (PDU) and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; its position is between the reserved bytes and the payload of the packet.
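A minimal sketch of this label field, under the assumption that it is stored as a plain 32-bit word as described above; the in/out label swap mirrors the 0x0000 to 0x0001 example given earlier and is illustrative only.

```cpp
#include <cstdint>

// Metropolitan-area packet label: 32 bits, upper 16 bits reserved,
// only the lower 16 bits carry the label value.
struct MetroLabel {
    uint32_t raw = 0;
    uint16_t value() const { return static_cast<uint16_t>(raw & 0xFFFF); }
    void set(uint16_t v) { raw = (raw & 0xFFFF0000u) | v; }
};

// Illustrative label swap: a packet entering device A with incoming label
// 0x0000 may leave it with outgoing label 0x0001, as assigned by the
// metropolitan area server during centrally controlled network access.
MetroLabel swapLabel(MetroLabel in, uint16_t out_label) {
    MetroLabel out = in;
    out.set(out_label);
    return out;
}
```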
Based on the characteristics of the video network, the audio and video display scheme provided by the embodiment of the invention follows the protocols of the video network and displays, on the terminal, the audio and video stream data acquired from the video networking server, so as to meet the relevant requirements of users.
Example one
Referring to fig. 5, a flowchart illustrating steps of an audio/video display method according to an embodiment of the present invention is shown, where the audio/video display method may be applied to a terminal, and specifically may include the following steps:
step 501: and after the terminal is accessed to the appointed video networking conference control software, acquiring the split screen number returned by the appointed video networking conference control software.
In the embodiment of the present invention, the terminal may be an electronic Device such as a mobile phone, a PAD (Portable Android Device), a personal computer, and the like.
The terminal registers with the video networking server in advance and obtains the video networking terminal number allocated by the video networking server, so that the terminal can obtain audio and video stream data from the video networking server through the designated video networking conference control software (such as the Pamier software); the details are described in the second embodiment below and are not repeated herein.
The split screen number refers to the number of sub-areas for dividing the display screen of the terminal into areas, and if the split screen number is 7, the display screen is divided into 7 sub-areas; when the number of the divided screens is 9, the display screen is divided into 9 sub-regions, and the like.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
The number of split screens may be preset by a user, or may be set by a designated video conference control software according to a scene of a video conference, and specifically, may be determined according to an actual situation, which is not limited in the embodiment of the present invention.
After the split screen number is set, it can be stored in the designated video networking conference control software; when the terminal accesses the designated video networking conference control software, the software sends the split screen number to the terminal according to the MAC address of the terminal, and step 502 is executed.
Step 502: and carrying out area division on the display screen of the terminal according to the split screen number to generate at least one sub-area.
After the terminal acquires the split screen number returned by the designated video networking conference control software, the display screen of the terminal can be divided according to the split screen number; for example, when the split screen number is 5, the display screen is divided into 5 sub-regions; when the split screen number is 8, the display screen is divided into 8 sub-regions, and so on.
After the display screen of the terminal is divided into regions, one sub-region may be obtained by dividing, that is, the whole display screen is used as a sub-region, or a plurality of sub-regions, for example, 4 sub-regions, 6 sub-regions, and the like may be obtained by dividing.
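One possible way to turn the split screen number into concrete sub-regions is a near-square grid, sketched below; the grid layout itself is an assumption, since the description only requires that the screen be divided into the given number of sub-regions.

```cpp
#include <cmath>
#include <vector>

struct Rect { int x, y, width, height; };

// Divide a display of the given size into `split_count` sub-regions laid out
// on a near-square grid (assumed layout; e.g. 5 -> a 3x2 grid with 5 cells used).
std::vector<Rect> divideScreen(int screen_w, int screen_h, int split_count) {
    std::vector<Rect> regions;
    if (split_count <= 0) return regions;
    int cols = static_cast<int>(std::ceil(std::sqrt(split_count)));
    int rows = (split_count + cols - 1) / cols;
    int cell_w = screen_w / cols, cell_h = screen_h / rows;
    for (int i = 0; i < split_count; ++i) {
        int r = i / cols, c = i % cols;
        regions.push_back({c * cell_w, r * cell_h, cell_w, cell_h});
    }
    return regions;   // one Rect per sub-region, in row-major order
}
```

With split_count equal to 1 the whole display screen becomes a single sub-region, matching the case described above.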
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the display screen of the terminal is divided into regions according to the number of split screens to generate at least one sub-region, step 503 is performed.
Step 503: and receiving at least one audio and video stream data acquired by the appointed video networking conference control software from a video networking server.
After the display screen of the terminal is divided into areas, the designated video network conference control software can acquire the saved video stream data from the video network server.
In the embodiment of the invention, a soft monitoring function is added to the designated video networking conference control software. Monitoring refers to monitoring the broadcasting of a program and recording missed broadcasts, delayed broadcasts and the conditions before and after the program so that they can be checked later; soft monitoring refers to applying this monitoring function within the designated video networking conference control software, receiving the audio and video stream data in the manner of a virtual terminal and then displaying the pictures in the soft monitor.
The embodiment of the invention receives the audio and video stream data in a virtual terminal mode, so that an entity terminal does not need to be used for receiving the audio and video stream data from the video network server.
It can be understood that when the Pamier software is used for a video conference, audio and video stream data generated by the conference can be uploaded to the video networking server in real time, and then the Pamier software can acquire the audio and video stream data stored in real time from the video networking server and return the audio and video stream data to the terminal.
Step 504 is performed after receiving at least one audio video stream data retrieved from the video network server by the designated video network conference control software.
Step 504: and respectively displaying the audio and video stream data in the corresponding subareas.
After at least one piece of audio/video stream data is acquired, each piece of audio/video stream data can be displayed in the corresponding sub area.
Which sub-area each piece of audio/video stream data is displayed in will be described in detail in the following preferred embodiment.
In a preferred embodiment of the present invention, each of the audio/video stream data is bound with a corresponding channel number, and step 504 may include:
substep S1: determining a sub-region corresponding to each audio/video stream data according to the mapping relation between the channel number and the sub-region;
substep S2: and aiming at each audio and video stream data, sequentially displaying the audio and video stream data in the subareas according to the subareas corresponding to the audio and video stream data.
In the embodiment of the invention, when the designated video networking conference control software acquires audio and video stream data from the video networking server, each piece of audio and video stream data carries a corresponding channel number; that is, the audio and video stream data returned by the video networking server to the designated video networking conference control software is audio and video stream data bound with a channel number.
A mapping relationship between a channel number and a sub-area is pre-stored in a terminal, for example, the channel number includes a channel number 1, a channel number 2, and a channel number 3, and the sub-area includes: region 1, region 2 and region 3, where channel number 1 has a mapping relationship with region 1, channel number 2 has a mapping relationship with region 2, and channel number 3 has a mapping relationship with region 3.
The mapping relationship may be stored in a list form at the terminal side, as shown in table 1 below:
table 1:
Channel number | Sub-area
Channel number 1 | Region 1
Channel number 2 | Region 2
Channel number 3 | Region 3
As can be seen from table 1, channel number 1 has a mapping relationship with area 1, channel number 2 has a mapping relationship with area 2, and channel number 3 has a mapping relationship with area 3.
Of course, the terminal side may also use other manners to store the mapping relationship between the channel number and the sub-area, for example, create a mapping relationship database to store the mapping relationship, and the like, which is not limited in this embodiment of the present invention.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
According to the mapping relation between the channel number and the sub-region, the sub-region corresponding to each audio/video stream data can be determined, and then each audio/video stream data can be displayed in the corresponding sub-region.
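The mapping of table 1 can be held in a simple lookup structure, as in the following sketch; channel numbers and region indices are assumed to be plain integers, and renderInRegion() is a placeholder for whatever drawing routine the terminal uses.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct AvStream { uint32_t channel_no; std::vector<uint8_t> data; };

// Mapping of table 1: channel number -> index of the sub-region it is shown in.
// The concrete numbers are the example values from the description.
std::unordered_map<uint32_t, int> channel_to_region = {
    {1, 0},   // channel number 1 -> region 1
    {2, 1},   // channel number 2 -> region 2
    {3, 2},   // channel number 3 -> region 3
};

// Stand-in for the terminal routine that draws decoded frames into a screen area.
void renderInRegion(int /*region*/, const AvStream& /*stream*/) {}

// Dispatch each received stream to the sub-region mapped to its channel number.
void displayStreams(const std::vector<AvStream>& streams) {
    for (const AvStream& s : streams) {
        auto it = channel_to_region.find(s.channel_no);
        if (it != channel_to_region.end()) {
            renderInRegion(it->second, s);
        }
    }
}
```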
Before the audio and video stream data is displayed, the audio and video stream data may be decoded and format-converted, as described in detail in the following preferred embodiment.
In a preferred embodiment of the present invention, before step 504, the method may further include:
step N1: performing audio and video decoding and format conversion processing on the audio and video stream data;
Correspondingly, the step of displaying each piece of audio/video stream data in the corresponding sub-area, i.e., step 504, may include:
sub-step M1: and respectively displaying the processed audio and video stream data in the corresponding subareas.
In the embodiment of the invention, after the audio and video stream data acquired by the designated video networking conference control software from the video networking server is received, the FFmpeg library can be used to decode the audio and video stream data, and the data can be format-converted according to the audio/video format that the terminal is allowed to play.
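A minimal FFmpeg-based sketch of this decode-and-convert step is given below. It assumes the video stream is H.264 and that the terminal wants RGBA frames; both are assumptions, and a real implementation would reuse the codec context and scaler across packets and also handle the audio path.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}

// Decode one H.264 packet and convert the frame to RGBA for display.
// Reduced sketch: error handling is limited to early returns.
bool decodeAndConvert(const uint8_t* data, int size, int out_w, int out_h) {
    const AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) return false;
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    if (!ctx || avcodec_open2(ctx, codec, nullptr) < 0) return false;

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    pkt->data = const_cast<uint8_t*>(data);   // sketch only: no ownership transfer
    pkt->size = size;

    bool ok = false;
    if (avcodec_send_packet(ctx, pkt) >= 0 &&
        avcodec_receive_frame(ctx, frame) >= 0) {
        // Format conversion to a pixel format the terminal is allowed to play.
        SwsContext* sws = sws_getContext(frame->width, frame->height,
                                         static_cast<AVPixelFormat>(frame->format),
                                         out_w, out_h, AV_PIX_FMT_RGBA,
                                         SWS_BILINEAR, nullptr, nullptr, nullptr);
        uint8_t* dst[4]; int dst_linesize[4];
        if (sws && av_image_alloc(dst, dst_linesize, out_w, out_h,
                                  AV_PIX_FMT_RGBA, 1) >= 0) {
            sws_scale(sws, frame->data, frame->linesize, 0, frame->height,
                      dst, dst_linesize);
            // dst[0] now holds an RGBA image ready to be drawn in a sub-region.
            av_freep(&dst[0]);
            ok = true;
        }
        sws_freeContext(sws);
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    return ok;
}
```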
Of course, in a specific implementation, the processing operation is not limited to the above processing operation, and when the audio/video stream data acquired from the video network server is encrypted data, decryption processing and the like may be performed on the audio/video stream data.
After each audio and video stream data is processed, each processed audio and video stream data can be displayed in the corresponding sub area respectively.
According to the audio and video display method provided by the embodiment of the invention, after the terminal is accessed to the appointed video network conference control software, the split screen number returned by the appointed video network conference software is obtained, the display screen of the terminal is divided into the regions according to the split screen number to generate at least one subregion, at least one audio and video stream data obtained by the appointed video network conference control software from a video network server is received, and each audio and video stream data is respectively displayed in the corresponding subregion. The embodiment of the invention can realize the simultaneous display of multiple paths of video pictures and improve the use experience of users.
Example two
Referring to fig. 6, a flowchart illustrating steps of an audio/video display method according to an embodiment of the present invention is shown, where the audio/video display method may be applied to a terminal, and specifically may include the following steps:
step 601: and sending a registration message to the video networking server through the appointed video networking conference control software.
In the embodiment of the present invention, the terminal may be an electronic Device such as a mobile phone, a PAD (Portable Android Device), a personal computer, and the like.
When the terminal needs to join the video conference corresponding to the designated video networking conference control software, it needs to register on the video networking server in advance, specifically, a registration message may be sent to the video networking server through the designated video networking conference control software, and step 602 is executed.
Step 602: and receiving the video networking terminal number which is distributed for the terminal by the video networking server and returned by the appointed video networking conference control software.
After the video network server receives the registration message sent by the terminal, the video network server may assign a video network terminal number to the terminal, where the video network terminal number includes the MAC address of the terminal, so as to perform data interaction with the terminal through the MAC address.
After the video network server allocates the video network terminal number to the terminal, the video network terminal number can be returned to the designated video network conference control software, the designated video network conference control software can store the MAC address of the terminal so as to forward subsequent information, and the allocated video network terminal number is returned to the terminal.
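The registration exchange of steps 601 and 602 can be pictured as two small messages; the message fields and helper below are assumptions for illustration only, since the description does not define a message format.

```cpp
#include <string>
#include <utility>

// Hypothetical registration messages exchanged through the designated
// video networking conference control software (field names are assumed).
struct RegisterRequest  { std::string terminal_mac; };    // MAC address of the terminal
struct RegisterResponse { std::string vn_terminal_no; };  // video networking terminal number

// Terminal side of steps 601/602: send the registration message and keep the
// returned video networking terminal number for later data interaction.
class ConferenceClient {
public:
    explicit ConferenceClient(std::string mac) : mac_(std::move(mac)) {}

    void registerWithServer() {
        RegisterRequest req{mac_};
        RegisterResponse resp = sendViaControlSoftware(req);  // relayed to the server
        vn_terminal_no_ = resp.vn_terminal_no;
    }

private:
    // Stand-in for the designated video networking conference control software,
    // which forwards the request to the video networking server.
    RegisterResponse sendViaControlSoftware(const RegisterRequest& req) {
        return RegisterResponse{"VN-" + req.terminal_mac};   // placeholder reply
    }

    std::string mac_;
    std::string vn_terminal_no_;
};
```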
After the terminal is registered in the video network server, the terminal can acquire required information, such as audio and video data, text information and the like, from the video network server.
Step 603: and after the terminal is accessed to the appointed video networking conference control software, acquiring the split screen number returned by the appointed video networking conference control software.
The split screen number refers to the number of sub-areas for dividing the display screen of the terminal into areas, and if the split screen number is 7, the display screen is divided into 7 sub-areas; when the number of the divided screens is 9, the display screen is divided into 9 sub-regions, and the like.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
The number of split screens may be preset by a user, or may be set by a designated video conference control software according to a scene of a video conference, and specifically, may be determined according to an actual situation, which is not limited in the embodiment of the present invention.
After the screen division number is set, the screen division number can be stored in the designated video networking conference control software, when the terminal accesses the designated video networking conference control software, the designated video networking conference control software sends the screen division number to the terminal according to the MAC address of the terminal, and step 604 is executed.
Step 604: and carrying out area division on the display screen of the terminal according to the split screen number to generate at least one sub-area.
After the terminal acquires the split screen number returned by the designated video networking conference control software, the display screen of the terminal can be divided according to the split screen number; for example, when the split screen number is 5, the display screen is divided into 5 sub-regions; when the split screen number is 8, the display screen is divided into 8 sub-regions, and so on.
After the display screen of the terminal is divided into regions, one sub-region may be obtained by dividing, that is, the whole display screen is used as a sub-region, or a plurality of sub-regions, for example, 4 sub-regions, 6 sub-regions, and the like may be obtained by dividing.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the display screen of the terminal is divided into regions according to the number of split screens to generate at least one sub-region, step 605 is performed.
Step 605: and receiving at least one audio and video stream data acquired by the appointed video networking conference control software from a video networking server.
After the display screen of the terminal is divided into areas, the designated video network conference control software can acquire the saved video stream data from the video network server.
In the embodiment of the invention, a soft monitoring function is added to the designated video networking conference control software. Monitoring refers to monitoring the broadcasting of a program and recording missed broadcasts, delayed broadcasts and the conditions before and after the program so that they can be checked later; soft monitoring refers to applying this monitoring function within the designated video networking conference control software, receiving the audio and video stream data in the manner of a virtual terminal and then displaying the pictures in the soft monitor.
The embodiment of the invention receives the audio and video stream data in a virtual terminal mode, so that an entity terminal does not need to be used for receiving the audio and video stream data from the video network server.
It can be understood that when the Pamier software is used for a video conference, audio and video stream data generated by the conference can be uploaded to the video networking server in real time, and then the Pamier software can acquire the audio and video stream data stored in real time from the video networking server and return the audio and video stream data to the terminal.
Step 606 is performed after receiving at least one audio video streaming data retrieved from the video network server by the designated video network conference control software.
Step 606: and respectively displaying the audio and video stream data in the corresponding subareas.
After at least one piece of audio/video stream data is acquired, each piece of audio/video stream data can be displayed in the corresponding sub area.
Which sub-area each piece of audio/video stream data is displayed in will be described in detail in the following preferred embodiment.
In a preferred embodiment of the present invention, each of the audio/video stream data is bound with a corresponding channel number, and step 606 may include:
substep A1: determining a sub-region corresponding to each audio/video stream data according to the mapping relation between the channel number and the sub-region;
substep A2: and aiming at each audio and video stream data, sequentially displaying the audio and video stream data in the subareas according to the subareas corresponding to the audio and video stream data.
In the embodiment of the invention, when the designated video networking conference control software acquires audio and video stream data from the video networking server, each piece of audio and video stream data carries a corresponding channel number; that is, the audio and video stream data returned by the video networking server to the designated video networking conference control software is audio and video stream data bound with a channel number.
A mapping relationship between a channel number and a sub-area is pre-stored in a terminal, for example, the channel number includes a channel number 1, a channel number 2, and a channel number 3, and the sub-area includes: region 1, region 2 and region 3, where channel number 1 has a mapping relationship with region 1, channel number 2 has a mapping relationship with region 2, and channel number 3 has a mapping relationship with region 3.
The mapping relationship may be stored in a list form at the terminal side, as shown in table 2 below:
table 2:
Channel number | Sub-area
Channel number 1 | Region 1
Channel number 2 | Region 2
Channel number 3 | Region 3
As can be seen from table 2, channel number 1 has a mapping relationship with area 1, channel number 2 has a mapping relationship with area 2, and channel number 3 has a mapping relationship with area 3.
Of course, the terminal side may also use other manners to store the mapping relationship between the channel number and the sub-area, for example, create a mapping relationship database to store the mapping relationship, and the like, which is not limited in this embodiment of the present invention.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
According to the mapping relation between the channel number and the sub-region, the sub-region corresponding to each audio/video stream data can be determined, and then each audio/video stream data can be displayed in the corresponding sub-region.
Before the audio and video stream data is displayed, the audio and video stream data may be decoded and format-converted, as described in detail in the following preferred embodiment.
In a preferred embodiment of the present invention, before the step 606, the method may further include:
step B1: performing audio and video decoding and format conversion processing on the audio and video stream data;
Correspondingly, the step of displaying each piece of audio/video stream data in the corresponding sub-area, i.e., step 606, may include:
substep C1: and respectively displaying the processed audio and video stream data in the corresponding subareas.
In the embodiment of the invention, after the audio and video stream data acquired by the designated video networking conference control software from the video networking server is received, the FFmpeg library can be used to decode the audio and video stream data, and the data can be format-converted according to the audio/video format that the terminal is allowed to play.
Of course, in a specific implementation, the processing operation is not limited to the above processing operation, and when the audio/video stream data acquired from the video network server is encrypted data, decryption processing and the like may be performed on the audio/video stream data.
After each audio and video stream data is processed, each processed audio and video stream data can be displayed in the corresponding sub area respectively.
After each audio/video stream data is displayed in the corresponding sub area, step 607 is executed.
Step 607: and receiving at least one next audio and video stream data acquired by the appointed video networking conference control software from the video networking server.
Step 608: and respectively displaying each next audio/video stream data in the corresponding subarea.
After the audio and video stream data are respectively displayed in the corresponding sub-areas, audio and video streams can be received cyclically, that is, at least one piece of next audio and video stream data is received and displayed in the corresponding sub-areas, so that the video conference can run continuously.
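Steps 605 to 608 therefore amount to a receive, decode and display loop; a compact sketch under the same assumptions as the earlier fragments (the receive and display helpers are placeholders, not part of the patent):

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

struct AvStreamData { int channel_no; std::vector<uint8_t> bytes; };

// Placeholders for the pieces sketched earlier in this description.
std::vector<AvStreamData> receiveStreams() { return {}; }       // from the control software
void decodeConvertAndShow(const AvStreamData&) {}               // decode, convert, display

// Keep the conference running: cyclically receive the next batch of audio and
// video stream data and display each stream in its mapped sub-region.
void conferenceLoop(std::atomic<bool>& running) {
    while (running.load()) {
        for (const AvStreamData& s : receiveStreams()) {
            decodeConvertAndShow(s);
        }
    }
}
```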
It is understood that the execution of steps 607-608 is similar to the execution of steps 605-606, and the embodiment of the present invention will not be described in detail herein.
According to the audio and video display method provided by the embodiment of the invention, after the terminal is accessed to the appointed video network conference control software, the split screen number returned by the appointed video network conference software is obtained, the display screen of the terminal is divided into the regions according to the split screen number to generate at least one subregion, at least one audio and video stream data obtained by the appointed video network conference control software from a video network server is received, and each audio and video stream data is respectively displayed in the corresponding subregion. The embodiment of the invention can realize the simultaneous display of multiple paths of video pictures and improve the use experience of users.
EXAMPLE III
Referring to fig. 7, a schematic structural diagram of an audio/video display device provided in an embodiment of the present invention is shown, where the audio/video display device may be applied to a terminal, and specifically may include:
a split screen number obtaining module 710, configured to obtain, after the terminal accesses a designated video networking conference control software, a split screen number returned by the designated video networking conference control software; a sub-region generating module 720, configured to perform region division on the display screen of the terminal according to the split screen number, and generate at least one sub-region; an audio/video stream receiving module 730, configured to receive at least one piece of audio/video stream data acquired by the specified video networking conference control software from a video networking server; and the audio/video stream display module 740 is configured to display each of the audio/video stream data in the corresponding sub area.
Preferably, each of the audio/video stream data is bound with a corresponding channel number, and the audio/video stream display module 740 includes: the sub-region determining sub-module is used for determining the sub-region corresponding to each audio/video stream data according to the mapping relation between the channel number and the sub-region; and the first audio and video stream display submodule is used for sequentially displaying the audio and video stream data in the subareas according to the subareas corresponding to the audio and video stream data aiming at the audio and video stream data.
Preferably, the apparatus further comprises: the decoding conversion processing module is used for carrying out audio and video decoding and format conversion processing on the audio and video stream data; the audio/video stream display module 740 includes: and the second audio and video stream display submodule is used for respectively displaying the processed audio and video stream data in the corresponding subareas.
The audio and video display device provided by the embodiment of the invention obtains the split screen number returned by the appointed video network conference software after the terminal is accessed to the appointed video network conference control software, divides the display screen of the terminal into regions according to the split screen number to generate at least one subregion, receives at least one audio and video stream data obtained by the appointed video network conference control software from a video network server, and respectively displays each audio and video stream data in the corresponding subregion. The embodiment of the invention can realize the simultaneous display of multiple paths of video pictures and improve the use experience of users.
Example four
Referring to fig. 8, a schematic structural diagram of an audio/video display device provided in an embodiment of the present invention is shown, where the audio/video display device may be applied to a terminal, and specifically may include:
a registration message sending module 810, configured to send a registration message to the video networking server through the designated video networking conference control software; a video networking terminal number receiving module 820, configured to receive a video networking terminal number allocated to the terminal by the video networking server returned by the designated video networking conference control software; a split screen number obtaining module 830, configured to obtain, after the terminal accesses the designated video networking conference control software, a split screen number returned by the designated video networking conference control software; a sub-region generating module 840, configured to perform region division on the display screen of the terminal according to the split screen number, and generate at least one sub-region; an audio/video stream receiving module 850, configured to receive at least one piece of audio/video stream data acquired by the specified video networking conference control software from a video networking server; the audio/video stream display module 860 is configured to display each of the audio/video stream data in the corresponding sub area respectively; a next audio/video stream receiving module 870, configured to receive at least one next audio/video stream data that is obtained by the designated video networking conference control software from a video networking server; a next audio/video stream display module 880, configured to respectively display each of the next audio/video stream data in the corresponding sub-area.
The audio and video display device provided by the embodiment of the invention acquires the split screen number returned by the appointed video networking conference control software after the terminal is accessed to the appointed video networking conference control software, divides the display screen of the terminal into regions according to the split screen number to generate at least one sub-region, receives at least one piece of audio and video stream data acquired by the appointed video networking conference control software from a video networking server, and respectively displays each piece of audio and video stream data in the corresponding sub-region. The embodiment of the invention can realize the simultaneous display of multiple paths of video pictures and improve the use experience of users.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The foregoing describes in detail an audio and video display method and an audio and video display device provided by the present invention, and specific examples are applied herein to explain the principles and embodiments of the present invention; the descriptions of the foregoing examples are only used to help understand the method and the core ideas of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the ideas of the present invention. In summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (7)

1. An audio and video display method is applied to a terminal and is characterized by comprising the following steps:
after the terminal is accessed to appointed video networking conference control software, acquiring the split screen number returned by the appointed video networking conference control software;
dividing the display screen of the terminal into regions according to the split screen number to generate at least one sub-region;
receiving at least one piece of audio and video stream data acquired by the appointed video networking conference control software from a video networking server, wherein the appointed video networking conference control software has a soft monitoring function used for recording missed broadcasts, delayed broadcasts, and the environment before and after a program for later review;
respectively displaying the audio and video stream data in the corresponding subareas;
after the step of displaying each audio/video stream data in the corresponding sub-area, the method further includes:
circularly receiving at least one piece of next audio and video stream data acquired by the appointed video networking conference control software from the video networking server;
respectively displaying each next audio/video stream data in the corresponding subarea;
before the step of respectively displaying each audio and video stream data in the corresponding sub-areas, the method further includes:
performing audio and video decoding and format conversion processing on the audio and video stream data;
the step of displaying each audio/video stream data in the corresponding sub-area includes:
and respectively displaying the processed audio and video stream data in the corresponding subareas.
2. The method of claim 1, further comprising, prior to the step of obtaining the number of split screens returned by the designated video networking conference control software:
sending a registration message to the video networking server through the designated video networking conference control software;
and receiving the video networking terminal number which is distributed for the terminal by the video networking server and returned by the appointed video networking conference control software.
3. The method according to claim 1, wherein each of the audio/video stream data is bound with a corresponding channel number, and the step of displaying each of the audio/video stream data in the corresponding sub-area includes:
determining a sub-region corresponding to each audio/video stream data according to the mapping relation between the channel number and the sub-region;
and for each audio and video stream data in turn, displaying the audio and video stream data in the sub-area corresponding to that audio and video stream data.
4. An audio and video display device is applied to a terminal and is characterized by comprising:
the screen dividing number acquisition module is used for acquiring the screen dividing number returned by the appointed video networking conference control software after the terminal is accessed to the appointed video networking conference control software;
the sub-region generation module is used for carrying out region division on the display screen of the terminal according to the split screen number to generate at least one sub-region;
the audio and video stream receiving module is used for receiving at least one piece of audio and video stream data acquired by the appointed video networking conference control software from a video networking server, wherein the appointed video networking conference control software has a soft monitoring function used for recording missed broadcasts, delayed broadcasts, and the environment before and after a program for later review;
the audio and video stream display module is used for respectively displaying the audio and video stream data in the corresponding subareas;
the next audio and video stream receiving module is used for circularly receiving at least one next audio and video stream data acquired by the appointed video networking conference control software from the video networking server;
the next audio and video stream display module is used for respectively displaying each next audio and video stream data in the corresponding sub area;
the decoding conversion processing module is used for carrying out audio and video decoding and format conversion processing on the audio and video stream data;
the audio and video stream display module comprises:
and the second audio and video stream display submodule is used for respectively displaying the processed audio and video stream data in the corresponding subareas.
5. The apparatus of claim 4, further comprising:
the registration message sending module is used for sending registration messages to the video networking server through the appointed video networking conference control software;
and the video networking terminal number receiving module is used for receiving the video networking terminal number which is returned by the appointed video networking conference control software and is distributed to the terminal by the video networking server.
6. The apparatus according to claim 4, wherein each of the audio/video stream data is bound with a corresponding channel number, and the audio/video stream display module includes:
the sub-region determining sub-module is used for determining the sub-region corresponding to each audio/video stream data according to the mapping relation between the channel number and the sub-region;
and the first audio and video stream display submodule is used for displaying, for each audio and video stream data in turn, the audio and video stream data in the sub-region corresponding to that audio and video stream data.
7. The apparatus of claim 4, further comprising:
the next audio and video stream receiving module is used for receiving at least one next audio and video stream data acquired by the appointed video networking conference control software from the video networking server;
and the next audio and video stream display module is used for respectively displaying each next audio and video stream data in the corresponding sub-area.
CN201811526872.6A 2018-12-13 2018-12-13 Audio and video display method and device Active CN109768964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811526872.6A CN109768964B (en) 2018-12-13 2018-12-13 Audio and video display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811526872.6A CN109768964B (en) 2018-12-13 2018-12-13 Audio and video display method and device

Publications (2)

Publication Number Publication Date
CN109768964A CN109768964A (en) 2019-05-17
CN109768964B true CN109768964B (en) 2022-03-29

Family

ID=66451781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811526872.6A Active CN109768964B (en) 2018-12-13 2018-12-13 Audio and video display method and device

Country Status (1)

Country Link
CN (1) CN109768964B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131754A (en) * 2019-12-25 2020-05-08 视联动力信息技术股份有限公司 Control split screen method and device of conference management system
CN113157232A (en) * 2021-04-26 2021-07-23 青岛海信医疗设备股份有限公司 Multi-screen splicing display system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106550282A (en) * 2015-09-17 2017-03-29 北京视联动力国际信息技术有限公司 A kind of player method and system of video data
CN107333144A (en) * 2016-04-28 2017-11-07 深圳锐取信息技术股份有限公司 Multichannel picture display process and device based on football race broadcast relay system
CN107948578A (en) * 2017-12-28 2018-04-20 深圳华望技术有限公司 The method of adjustment and adjusting apparatus of video conferencing system transmission bandwidth and resolution ratio
CN108966024A (en) * 2017-11-29 2018-12-07 北京视联动力国际信息技术有限公司 A kind of transmission method of audio/video flow, back method, apparatus and system
CN108965781A (en) * 2017-12-12 2018-12-07 北京视联动力国际信息技术有限公司 A kind of transmission method of audio/video flow, device and display system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557495B (en) * 2009-05-18 2011-01-26 上海华平信息技术股份有限公司 Bandwidth control method of video conferencing system
CN102263930B (en) * 2010-05-24 2014-02-26 杭州华三通信技术有限公司 Method and device for broadcasting multiple pictures in video conference
US8872878B2 (en) * 2011-07-20 2014-10-28 Cisco Technology, Inc. Adaptation of video for use with different number of cameras and displays at endpoints
CN103391418B (en) * 2013-01-31 2016-06-22 杭州唐桥通视科技有限公司 The fusion method of video conferencing system Network Based and Broadcast and TV system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106550282A (en) * 2015-09-17 2017-03-29 北京视联动力国际信息技术有限公司 A kind of player method and system of video data
CN107333144A (en) * 2016-04-28 2017-11-07 深圳锐取信息技术股份有限公司 Multichannel picture display process and device based on football race broadcast relay system
CN108966024A (en) * 2017-11-29 2018-12-07 北京视联动力国际信息技术有限公司 A kind of transmission method of audio/video flow, back method, apparatus and system
CN108965781A (en) * 2017-12-12 2018-12-07 北京视联动力国际信息技术有限公司 A kind of transmission method of audio/video flow, device and display system
CN107948578A (en) * 2017-12-28 2018-04-20 深圳华望技术有限公司 The method of adjustment and adjusting apparatus of video conferencing system transmission bandwidth and resolution ratio

Also Published As

Publication number Publication date
CN109768964A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN109120946B (en) Method and device for watching live broadcast
CN110166728B (en) Video networking conference opening method and device
CN108965224B (en) Video-on-demand method and device
CN111193788A (en) Audio and video stream load balancing method and device
CN109167960B (en) Method and system for processing video stream data
CN109660816B (en) Information processing method and device
CN108965226B (en) Data acquisition method and device based on video network
CN110049273B (en) Video networking-based conference recording method and transfer server
CN109743550B (en) Method and device for monitoring data flow regulation
CN111131760B (en) Video recording method and device
CN109743555B (en) Information processing method and system based on video network
CN109743284B (en) Video processing method and system based on video network
CN109005378B (en) Video conference processing method and system
CN111131754A (en) Control split screen method and device of conference management system
CN109768964B (en) Audio and video display method and device
CN110677392B (en) Video data transmission method and device
CN110022500B (en) Packet loss processing method and device
CN111212255A (en) Monitoring resource obtaining method and device and computer readable storage medium
CN108574655B (en) Conference monitoring and broadcasting method and device
CN110557411A (en) video stream processing method and device based on video network
CN110324578B (en) Monitoring video processing method, device and storage medium
CN110139060B (en) Video conference method and device
CN109859824B (en) Pathological image remote display method and device
CN110120937B (en) Resource acquisition method, system, device and computer readable storage medium
CN111654659A (en) Conference control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant