CN111510759B - Video display method, device and readable storage medium - Google Patents

Video display method, device and readable storage medium

Info

Publication number
CN111510759B
Authority
CN
China
Prior art keywords
video
target data
data packet
packet
queue
Prior art date
Legal status
Active
Application number
CN202010188270.5A
Other languages
Chinese (zh)
Other versions
CN111510759A (en)
Inventor
蔡耀
王海
胡磊
杨春晖
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd
Priority to CN202010188270.5A (granted as CN111510759B)
Publication of CN111510759A
Application granted
Publication of CN111510759B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems

Abstract

The embodiment of the invention provides a video display method, a video display device and a readable storage medium. The video display method of the invention receives, through a first receiving thread, a media packet sent by a streaming media server, wherein the media packet comprises a data packet acquired by a chairman participant terminal and the type of the data packet, parses the media packet to obtain a target data packet whose type is the video type, places the target data packet at the tail of a video queue, acquires the target data packet from the head of the video queue through a decoding thread, calls a decoder through the decoding thread to decode the target data packet to obtain an original pixel stream, renders the original pixel stream with a drawing control to obtain a rendered video picture, and displays the rendered video picture. Therefore, the mobile terminal can display the video picture of the chairman participant terminal side.

Description

Video display method, device and readable storage medium
Technical Field
The present invention relates to the field of communications, and in particular, to a video display method, apparatus, and readable storage medium.
Background
In scenarios supported by the video network such as emergency command, rescue and disaster relief, and comprehensive governance conferences, multiple video network terminals can be networked together to hold a conference, and the unmanned aerial vehicle is particularly important as a participating terminal. The unmanned aerial vehicle can track a field emergency in real time, can reach dangerous areas that people temporarily cannot reach (such as landslides and barrier lakes), and can transmit real-time pictures of the dangerous areas back to the chairman participant terminal, so that the command personnel in the command hall can conveniently learn the field situation and perform efficient dispatching and command.
In the environment of 4th generation mobile communication (4G, the 4th Generation mobile communication technology), the existing unmanned aerial vehicle conference mode is as follows: the mobile terminal is connected with the remote control handle of the unmanned aerial vehicle through a data line to receive the audio and video stream sent by the remote control handle of the unmanned aerial vehicle, the audio and video stream is encapsulated through the Transmission Control Protocol (TCP) and then transmitted to the streaming media server, and the streaming media server sends the encapsulated data packet to the chairman participant terminal.
However, at present the mobile terminal can only send the received audio and video data of the unmanned aerial vehicle to the chairman participant terminal; it cannot receive the video data collected by the chairman participant terminal, so the video picture of the remote chairman participant terminal cannot be displayed.
Disclosure of Invention
The embodiment of the invention provides a video display method, a video display device and a readable storage medium, which are used for solving the problem that a mobile terminal cannot receive video data acquired by a chairman participant terminal at present, so that a video picture of the chairman participant terminal at a far end cannot be displayed.
In a first aspect of an embodiment of the present invention, a video display method is provided, which is executed in a mobile terminal, and includes:
Receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet;
analyzing the media packet to obtain a target data packet with the video type, and placing the target data packet at the tail of a video queue;
the target data packet is obtained from the head of the video queue through a decoding thread, and a decoder is called through the decoding thread to decode the target data packet, so that an original pixel stream is obtained;
and rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
In a second aspect of an embodiment of the present invention, there is provided a video display apparatus, provided in a mobile terminal, including:
the receiving module is used for receiving a media packet sent by the streaming media server through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet;
the analysis module is used for analyzing the media packet to obtain a target data packet with the type of video type, and placing the target data packet at the tail end of a video queue;
the decoding module is used for obtaining the target data packet from the head of the video queue through a decoding thread, and calling a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream;
and the display module is used for rendering the original pixel stream by adopting a drawing control, obtaining a rendered video picture and displaying the rendered video picture.
In a third aspect of the embodiments of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video display method described above.
In a fourth aspect of the present invention, there is provided a video display apparatus comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program implementing the steps of the video display method described above when executed by the processor.
Compared with the prior art, the invention has the following advantages:
according to the video display method provided by the embodiment of the invention, a media packet sent by a streaming media server is received through a first receiving thread, wherein the media packet comprises a data packet and a data packet type acquired by a chairman participant terminal, the media packet is analyzed to obtain a target data packet with the type of video type, the target data packet is put into the tail end of a video queue, the target data packet is acquired from the head of the video queue through a decoding thread, a decoder is called through the decoding thread to decode the target data packet to obtain an original pixel stream, the original pixel stream is rendered through a drawing control to obtain a rendered video picture, and the rendered video picture is displayed. Therefore, the mobile terminal can display the video picture of the chairman participant terminal side.
The foregoing description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
Fig. 1 is a system architecture diagram on which a video display method according to an embodiment of the present invention is based;
FIG. 2 is a flowchart illustrating steps of a video display method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of another video display method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video display device according to an embodiment of the present invention;
FIG. 5 is a networking schematic diagram of a video network according to the present invention;
FIG. 6 is a schematic diagram of a hardware architecture of a node server according to the present invention;
Fig. 7 is a schematic diagram of a hardware architecture of an access switch according to the present invention;
fig. 8 is a schematic diagram of a hardware structure of an Ethernet protocol conversion gateway according to the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a system architecture diagram on which a video display method according to an embodiment of the present invention is based. The system comprises an unmanned aerial vehicle device 101, a remote control handle 102, a mobile terminal 103, a streaming media server 104, a video network terminal 105 and a client 106, wherein the unmanned aerial vehicle device 101 is wirelessly connected with the remote control handle 102, and the mobile terminal 103 is connected with the remote control handle 102 through a Universal Serial Bus (USB). The client 106 is configured to control the conference, and the conference-joining signaling sent by the client 106 is forwarded to the streaming media server 104 through the video network. The streaming media server 104 then sends a conference-joining instruction to the mobile terminal 103, and the mobile terminal 103 can join the conference after receiving the instruction. After joining the conference, the mobile terminal 103 can send the video data received from the unmanned aerial vehicle device 101 through the remote control handle to the streaming media server 104, and the streaming media server sends the video data to the video network terminal 105 through the video network, so that the participants on the video network terminal 105 side can watch the picture acquired by the unmanned aerial vehicle. The video network terminal 105 is the chairman participant terminal, the participants on the video network terminal 105 side are generally command personnel, and the command personnel conduct command according to the received picture acquired by the unmanned aerial vehicle. At present, the mobile terminal 103 can send the received video data of the unmanned aerial vehicle device 101 to the chairman participant terminal, but cannot receive the video data collected by the chairman participant terminal, so the video picture of the remote chairman participant terminal cannot be displayed.
In order to solve the technical problems, an embodiment of the present invention provides a video display method. Referring to fig. 2, fig. 2 is a flowchart illustrating steps of a video display method according to an embodiment of the present invention, where the method may be implemented in a mobile terminal, and the method according to the embodiment includes the following steps:
step 201, receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet.
In this step, the first receiving thread is used to receive the media packet sent by the streaming media server, where the media packet comprises a data packet collected by the chairman participant terminal and the type of the data packet. The data packet collected by the chairman participant terminal is a video data packet or an audio data packet, that is, the data packet in the media packet is a video data packet or an audio data packet.
Step 202, analyzing the media packet to obtain a target data packet with the video type, and placing the target data packet at the tail of the video queue.
The mobile terminal can parse the media packet and determine, according to the parsing result, the type of the data packet collected by the chairman participant terminal in the media packet. The parsing result comprises the data packet collected by the chairman participant terminal and the type of the data packet. If the type of the data packet is determined to be the video type according to the parsing result, the data packet is taken as a target data packet and placed at the tail of the video queue; if the type of the data packet is the audio type, the data packet is not a target data packet and is not placed in the video queue. For example, when the first target data packet (target data packet 1) is placed at the tail of the video queue, target data packet 1 is both the first and the last data packet in the video queue. When the second target data packet (target data packet 2) is placed at the tail of the video queue, target data packet 1 is the first data packet in the video queue and target data packet 2 is the second, i.e., the last, data packet in the video queue, so the target data packets in the video queue are guaranteed to be arranged in order of arrival.
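Purely as an illustration, the following Java sketch shows one way steps 201 and 202 could be organized on an Android mobile terminal. The class names, type constants and field layout of the media packet are assumptions made for the example and are not taken from the patent.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical representation of a parsed media packet; field names are assumptions.
class MediaPacket {
    static final int TYPE_VIDEO = 1;   // assumed type constant
    static final int TYPE_AUDIO = 2;   // assumed type constant
    final int type;
    final byte[] payload;              // data packet collected by the chairman participant terminal
    MediaPacket(int type, byte[] payload) { this.type = type; this.payload = payload; }
}

class VideoQueue {
    // The video queue of step 202; packets are appended at the tail and consumed from the head.
    private final Deque<MediaPacket> queue = new ArrayDeque<>();

    // Step 202: keep only target data packets whose type is the video type.
    synchronized void onMediaPacket(MediaPacket packet) {
        if (packet.type == MediaPacket.TYPE_VIDEO) {
            queue.addLast(packet);     // tail of the video queue, preserving arrival order
        }
        // audio packets are not placed in the video queue
    }

    synchronized MediaPacket pollHead() {
        return queue.pollFirst();      // head of the video queue (step 203)
    }

    synchronized int length() { return queue.size(); }
}
```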
Step 203, obtaining a target data packet from the head of the video queue through a decoding thread, and calling a decoder through the decoding thread to decode the target data packet to obtain an original pixel stream.
The decoding thread starts to acquire the target data packet from the head of the video queue, so that the decoding of the target data packet received first can be ensured.
Step 204, rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
Since step 203 ensures that the target data packets at the front of the video queue are decoded first to obtain the original pixel stream, this step can render the original pixel stream decoded from the target data packet at the front of the video queue, obtain a rendered video picture, and display the rendered video picture, where the rendered video picture is the video picture of the chairman participant terminal side. Therefore, to a certain extent, the mobile terminal can display video pictures according to the time sequence in which the chairman participant terminal collected the video data packets, that is, the video pictures of the video data packets collected by the chairman participant terminal are displayed on the mobile terminal.
According to the video display method, a media packet sent by a streaming media server is received through a first receiving thread, wherein the media packet comprises a data packet and a data packet type collected by a chairman participant terminal, the media packet is analyzed to obtain a target data packet with the video type, the target data packet is placed at the tail end of a video queue, the target data packet is obtained from the head of the video queue through a decoding thread, a decoder is called through the decoding thread to decode the target data packet to obtain an original pixel stream, the original pixel stream is rendered through a drawing control, a rendered video picture is obtained, and the rendered video picture is displayed. Therefore, the mobile terminal can display the video picture of the chairman participant terminal side.
Referring to fig. 3, fig. 3 is a flowchart illustrating steps of another video display method according to an embodiment of the present invention. The method comprises the following steps:
step 301, a first receiving thread is created, and the first receiving thread and a second thread created by the mobile terminal and used for receiving signaling are asynchronous threads.
The second thread is used for receiving signaling sent by the streaming media server, for example, the signaling is conference-joining signaling, and the mobile terminal can join the video networking conference after receiving the conference-joining signaling. The first thread and the second thread are asynchronous threads, and the first thread is used for receiving media packets sent by the streaming media server.
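As an illustration only, the Java sketch below shows one way the two asynchronous receiving threads of step 301 could be set up. The transport methods are placeholders, since the actual protocol stack between the mobile terminal and the streaming media server is not described at this point; MediaPacket and VideoQueue are the illustrative classes from the earlier sketch.

```java
class ReceiverThreads {
    private final VideoQueue videoQueue = new VideoQueue();

    // Placeholder transport methods; the real receive path is an assumption for this sketch.
    private MediaPacket receiveMediaPacket() { /* blocking read from the streaming media server */ return null; }
    private byte[] receiveSignalingPacket() { /* blocking read of signaling */ return null; }
    private void handleSignaling(byte[] signaling) { /* e.g. join the video-network conference */ }

    void start() {
        // Second thread: receives signaling such as the conference-joining instruction.
        new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                handleSignaling(receiveSignalingPacket());
            }
        }, "signaling-recv").start();

        // First receiving thread: receives media packets, asynchronous to the signaling thread.
        new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                MediaPacket p = receiveMediaPacket();
                if (p != null) videoQueue.onMediaPacket(p);   // steps 302-303
            }
        }, "media-recv").start();
    }
}
```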
Step 302, receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet.
Step 303, analyzing the media packet to obtain a target data packet with the video type, and placing the target data packet at the tail of the video queue.
Step 304, detecting the length of the target data packet in the video queue.
Step 305, determining whether the length of the target data packet in the video queue is greater than a preset length.
Step 306, if the length of the target data packets in the video queue is greater than the preset length, discarding the N target data packets in the video queue.
N is equal to the difference value between the number of target data packets in the video queue and the preset length, and the N target data packets comprise N data packets which are sequentially selected from the head of the video queue.
For steps 304 to 306, the mobile terminal may detect the length of the target data packet in the video queue in real time, and determine whether the length of the target data packet in the video queue is greater than a preset length, and discard N target data packets in the video queue if the length of the target data packet in the video queue is greater than the preset length. For example, the preset length is 30, and the length of the target data packet in the video queue is currently detected to be equal to 70 (there are 70 target data packets in the video queue), so that the length of the target data packet in the video queue is greater than the preset length, 40 data packets are sequentially selected from the head of the video queue, and the 40 data packets are discarded.
It should be noted that when the processor of the mobile terminal has a high resource occupancy rate, the memory occupancy rate is high, or video playing is blocked for network reasons, more target data packets accumulate in the video queue. Meanwhile, because relatively few audio data packets come from the chairman participant terminal (the video data packets from the chairman participant terminal are generally hundreds of times as numerous as the audio data packets), the audio data in the audio data packets can still be played smoothly. If a certain number of target data packets were not discarded, the mobile terminal would keep obtaining target data packets from the head of the video queue and decoding them until the processor has sufficient resources, and the rendered video picture would fall out of sync with the audio data. Therefore, in this embodiment, steps 304 to 306 avoid the phenomenon that the rendered video picture is not synchronized with the audio data.
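A minimal Java sketch of the discard logic of steps 304 to 306 follows; the preset length of 30 is only the example value used above, and VideoQueue is the illustrative class from the earlier sketch.

```java
class QueueTrimmer {
    // Preset length is an assumption taken from the example in the text (30 packets).
    private static final int PRESET_LENGTH = 30;

    // Steps 304-306: if the queue has grown beyond the preset length, discard the
    // oldest packets (taken from the head) so that video stays close to the audio.
    static void trim(VideoQueue queue) {
        int n = queue.length() - PRESET_LENGTH;   // N = current length - preset length
        for (int i = 0; i < n; i++) {
            queue.pollHead();                     // drop N packets selected in order from the head
        }
    }
}
```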
Step 307, the target data packet is obtained from the head of the video queue through the decoding thread, and the decoder is called by the decoding thread to decode the target data packet, so as to obtain the original pixel stream.
It should be noted that the target data packet is, for example, a video data packet in H.264 format, and the decoded original pixel stream is an original pixel stream in YUV format. "Y" in YUV represents luminance (Luminance or Luma), i.e., the gray-scale value, and "U" and "V" represent chrominance (Chroma). The decoder is a hard decoder or a soft decoder. The hard decoder may be a MediaCodec decoder. The soft decoder may be an FFmpeg decoder; FFmpeg is a powerful, commonly used open-source audio/video processing framework that supports encoding and decoding of all mainstream audio and video formats and also supports operations such as splicing.
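By way of illustration, the following Java sketch shows how the hard-decoder path of step 307 might look on Android using MediaCodec. It assumes each target data packet carries a complete H.264 access unit, that the stream resolution is known, and that decoded frames are released directly to a Surface supplied by the drawing control (see the SurfaceView sketch below); none of these details are prescribed by the patent. If explicit YUV buffers are needed instead, the codec could be configured without a Surface.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.nio.ByteBuffer;

class HardDecoder {
    private final MediaCodec codec;

    // width and height are assumed to be known from the stream.
    HardDecoder(Surface surface, int width, int height) throws Exception {
        codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC); // H.264
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        codec.configure(format, surface, null, 0);
        codec.start();
    }

    // Called by the decoding thread with one target data packet taken from the queue head.
    void decode(byte[] targetPacket, long ptsUs) {
        int inIndex = codec.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            ByteBuffer in = codec.getInputBuffer(inIndex);
            in.clear();
            in.put(targetPacket);
            codec.queueInputBuffer(inIndex, 0, targetPacket.length, ptsUs, 0);
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = codec.dequeueOutputBuffer(info, 10_000);
        if (outIndex >= 0) {
            // Passing true releases the decoded frame to the Surface, i.e. the frame is rendered.
            codec.releaseOutputBuffer(outIndex, true);
        }
    }
}
```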
Step 308, rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
The drawing control comprises a SurfaceView control. The SurfaceView control can render the original pixel stream directly.
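One possible way to wire the SurfaceView drawing control to the hard-decoder sketch above is shown below; the resolution values are placeholders, and in practice they would come from the stream.

```java
import android.content.Context;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

class VideoSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    private HardDecoder decoder;   // see the hard-decoder sketch above

    VideoSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            // Assumed resolution for illustration only.
            decoder = new HardDecoder(holder.getSurface(), 1280, 720);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) { /* release the decoder here */ }
}
```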
In this embodiment, the second thread is configured to receive the conference-joining instruction sent by the streaming media server, and the mobile terminal can join the video network conference after receiving the instruction. After joining the conference, the mobile terminal can send the video data received from the unmanned aerial vehicle device through the remote control handle to the streaming media server, the streaming media server sends the video data to the chairman participant terminal through the video network, and the participants on the chairman participant terminal side can view the video picture collected by the unmanned aerial vehicle. In addition, the media packet sent by the streaming media server can be received through the first thread, so that the video picture of the chairman participant terminal is displayed. In this way, the mobile terminal can both transmit the video data acquired by the unmanned aerial vehicle device to the chairman participant terminal and display the video picture of the chairman participant terminal.
According to the video display method provided by this embodiment, the length of the target data packets in the video queue is detected, it is judged whether the length is greater than the preset length, and if so, N target data packets in the video queue are discarded; the target data packet is then acquired from the head of the video queue through the decoding thread, the decoder is called through the decoding thread to decode the target data packet to obtain the original pixel stream, the original pixel stream is rendered with the drawing control to obtain a rendered video picture, and the rendered video picture is displayed. Therefore, the video picture of the chairman participant terminal side can be displayed on the mobile terminal, and the displayed video picture and audio data of the chairman participant terminal can be kept synchronized to a certain extent, so that the person controlling the unmanned aerial vehicle equipment on the mobile terminal side can also watch the video picture of the chairman participant terminal and can conveniently be commanded by the command personnel on the chairman participant terminal side.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a video display device according to an embodiment of the present invention, where the device 400 is disposed in a mobile terminal, and includes:
a receiving module 410, configured to receive, by using a first receiving thread, a media packet sent by a streaming media server, where the media packet includes a data packet collected by a chairman participant terminal and a type of the data packet;
the parsing module 420 is configured to parse the media packet to obtain a target data packet with a video type, and put the target data packet at the end of a video queue;
the decoding module 430 is configured to obtain the target data packet from the head of the video queue through a decoding thread, and invoke a decoder to decode the target data packet through the decoding thread, so as to obtain an original pixel stream;
the display module 440 is configured to render the original pixel stream using a drawing control, obtain a rendered video frame, and display the rendered video frame.
According to the video display method, a media packet sent by a streaming media server is received through a first receiving thread, wherein the media packet comprises a data packet and a data packet type collected by a chairman participant terminal, the media packet is analyzed to obtain a target data packet with the video type, the target data packet is placed at the tail end of a video queue, the target data packet is obtained from the head of the video queue through a decoding thread, a decoder is called through the decoding thread to decode the target data packet to obtain an original pixel stream, the original pixel stream is rendered through a drawing control, a rendered video picture is obtained, and the rendered video picture is displayed. Therefore, the mobile terminal can display the video picture of the chairman participant terminal side.
Optionally, the method further comprises:
the detection module is used for detecting the length of the target data packet in the video queue;
the judging module is used for judging whether the length of the target data packet in the video queue is larger than a preset length;
and the discarding module is used for discarding N target data packets in the video queue if the length of the target data packets in the video queue is greater than the preset length, wherein N is equal to the difference value between the number of the target data packets in the video queue and the preset length, and the N target data packets comprise N data packets which are sequentially selected from the head of the video queue.
Optionally, before the receiving, by the first receiving thread, the media packet sent by the streaming media server, the method further includes:
the creating module is used for creating the first receiving thread, and the first receiving thread and a second receiving thread created by the mobile terminal and used for receiving signaling are asynchronous threads.
Optionally, the decoder is a hard decoder or a soft decoder.
Optionally, the drawing control includes a SurfaceView control, and the original pixel stream is a pixel stream in YUV format.
In addition, the embodiment of the present invention further provides a video display apparatus, where the video display apparatus includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program when executed by the processor implements each process of the video display method embodiment of the foregoing embodiment, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the processes of the video display method embodiment described above and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiment of the invention also provides a computer program which can be stored on a cloud or local storage medium. Which when executed by a computer or processor is adapted to carry out the respective steps of the video display method of an embodiment of the invention and to carry out the respective modules in the video display apparatus according to an embodiment of the invention.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
As will be readily appreciated by those skilled in the art: any combination of the above embodiments is possible, and thus is an embodiment of the present invention, but the present specification is not limited by the text.
The video display methods provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a system constructed with aspects of the present invention will be apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in a video display method according to an embodiment of the invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
For a better understanding of embodiments of the present invention, the following description of the video networking is presented to one skilled in the art:
the partial techniques applied by the video networking are as follows:
network technology (Network Technology)
The network technology innovation of the video network improves on traditional Ethernet to cope with the potentially huge video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video network technology employs packet switching to meet streaming requirements. The video network technology has the flexibility, simplicity and low price of packet switching, and at the same time has the quality and security guarantees of circuit switching, thereby realizing seamless connection of whole-network switched virtual circuits and a unified data format.
Exchange technology (Switching Technology)
The video network adopts the two advantages of the asynchronization and the packet switching of the Ethernet, eliminates the Ethernet defect on the premise of full compatibility, has full-network end-to-end seamless connection, and is directly connected with the user terminal to directly bear the IP data packet. The user data does not need any format conversion in the whole network. The video networking is a higher-level form of Ethernet, is a real-time exchange platform, can realize real-time transmission of full-network large-scale high-definition video which cannot be realized by the current Internet, and pushes numerous network video applications to high definition and unification.
Server technology (Server Technology)
The server technology on the video networking and unified video platform is different from the server in the traditional sense, the streaming media transmission is based on connection-oriented basis, the data processing capability is irrelevant to the flow and the communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than hundred times than that of a traditional server.
Storage technology (Storage Technology)
The ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system for adapting to the ultra-large capacity and ultra-large flow media content, the program information in the server instruction is mapped to a specific hard disk space, the media content does not pass through the server any more, the media content is instantly and directly delivered to a user terminal, and the waiting time of the user is generally less than 0.2 seconds. The optimized sector distribution greatly reduces the mechanical motion of magnetic head seek of the hard disk, the resource consumption only accounts for 20% of the IP Internet of the same grade, but the concurrent flow which is 3 times greater than that of the traditional hard disk array is generated, and the comprehensive efficiency is improved by more than 10 times.
Network security technology (Network Security Technology)
The structural design of the video networking thoroughly structurally solves the network security problem puzzling the Internet by means of independent permission of each service, complete isolation of equipment and user data and the like, generally does not need antivirus programs or firewalls, eliminates attacks of hackers and viruses, and provides a structural carefree security network for users.
Service innovation technology (Service Innovation Technology)
The unified video platform fuses services with transmissions, whether a single user, private network users or a network aggregate, but automatically connects at a time. The user terminal, the set top box or the PC is directly connected to the unified video platform, so that various multimedia video services are obtained. The unified video platform adopts a menu type table allocation mode to replace the traditional complex application programming, and can realize complex application by using very few codes, thereby realizing 'infinite' new business innovation.
Networking of the video networking is as follows:
the video networking is a centrally controlled network structure, which may be of the tree network, star network, ring network, etc., but on the basis of this there is a need for a centralized control node in the network to control the whole network.
As shown in fig. 5, the view network is divided into an access network and a metropolitan area network.
The devices of the access network part can be mainly divided into 3 classes: node server, access switch, terminal (including various set-top boxes, code boards, memories, etc.). The node server is connected with an access switch, which can be connected with a plurality of terminals and can be connected with an Ethernet.
The node server is a node with a centralized control function in the access network, and can control the access switch and the terminal. The node server may be directly connected to the access switch or may be directly connected to the terminal.
Similarly, devices of the metropolitan area network portion can also be classified into 3 categories: metropolitan area server, node switch, node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is the node server of the access network part, namely the node server belongs to the access network part and also belongs to the metropolitan area network part.
The metropolitan area server is a node with centralized control function in the metropolitan area network, and can control a node switch and a node server. The metropolitan area server may be directly connected to the node switch or directly connected to the node server.
Thus, the whole video network is a hierarchical centralized control network structure, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star, ring and the like.
The access network part can be vividly called as a unified video platform (part in a dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform can be interconnected and intercommunicated through metropolitan area and wide area video networking.
View networking device classification
1.1 Devices in the video network according to the embodiment of the present invention may be mainly classified into 3 types: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or a national network, a global network, etc.) and an access network.
1.2 Devices in the access network part can be mainly classified into 3 classes: node server, access switch (including Ethernet protocol conversion gateway), terminal (including various set-top boxes, code board, memory, etc.).
The specific hardware structure of each access network device is as follows:
the node server:
as shown in fig. 6, the device mainly comprises a network interface module 601, a switching engine module 602, a CPU module 603 and a disk array module 604;
Wherein, the packets coming in from the network interface module 601, the cpu module 603 and the disk array module 604 all enter the switching engine module 602; the switching engine module 602 performs an operation of looking up an address table 605 on an incoming packet, thereby obtaining packet guiding information; and stores the packet into a queue of a corresponding packet buffer 606 according to the packet's guiding information; discarding if the queue of packet buffer 606 is nearly full; the switch engine module 602 polls all packet buffer queues for forwarding if the following conditions are met: 1) The port sending buffer is not full; 2) The queue packet counter is greater than zero. The disk array module 604 mainly controls the hard disk, including initializing, reading and writing operations on the hard disk; the CPU module 603 is mainly responsible for protocol processing with access switches and terminals (not shown), configuration of an address table 605 (including a downstream protocol packet address table, an upstream protocol packet address table, and a data packet address table), and configuration of the disk array module 604.
Access switch:
as shown in fig. 7, mainly includes a network interface module (a downstream network interface module 701, an upstream network interface module 702), a switching engine module 703, and a CPU module 704;
wherein, the packet (uplink data) coming in from the downlink network interface module 701 enters the packet detection module 705; the packet detection module 705 detects whether the Destination Address (DA), source Address (SA), packet type, and packet length of the packet meet the requirements, if so, allocates a corresponding stream identifier (stream-id), and enters the switching engine module 703, otherwise, discards the packet; the packet (downlink data) coming in from the uplink network interface module 702 enters the switching engine module 703; the data packet coming in by the CPU module 704 enters the switching engine module 703; the switching engine module 703 performs an operation of looking up an address table 706 on an incoming packet, thereby obtaining packet guiding information; if a packet entering the switching engine module 703 is destined for the downstream network interface to the upstream network interface, the packet is stored in the queue of the corresponding packet buffer 707 in conjunction with a stream-id; discarding if the queue of the packet buffer 707 is nearly full; if the packet entering the switching engine module 703 is not sent from the downlink network interface to the uplink network interface, storing the data packet in the queue of the corresponding packet buffer 707 according to the packet guiding information; if the queue of the packet buffer 707 is nearly full, it is discarded.
The switching engine module 703 polls all packet buffer queues, in two cases in the embodiment of the present invention:
if the queue is sent from the downlink network interface to the uplink network interface, the following conditions are satisfied for forwarding: 1) The port sending buffer is not full; 2) The queue packet counter is greater than zero; 3) A token generated by the rate control module is obtained;
if the queue is not addressed by the downstream network interface to the upstream network interface, the following condition is satisfied for forwarding: 1) The port sending buffer is not full; 2) The queue packet counter is greater than zero.
The rate control module 708 is configured by the CPU module 704 to generate tokens for all packet buffer queues from the downstream network interface to the upstream network interface at programmable intervals for controlling the rate of upstream forwarding.
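Although the rate control module described here is a hardware block inside the access switch, the token mechanism can be illustrated with a small software sketch; the interval, token granularity and data structures below are assumptions made for illustration only.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class RateControl {
    // One token counter per uplink packet-buffer queue; queue count and interval are assumptions.
    private final AtomicInteger[] tokens;

    RateControl(int queueCount, long intervalMicros) {
        tokens = new AtomicInteger[queueCount];
        for (int i = 0; i < queueCount; i++) tokens[i] = new AtomicInteger(0);
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // Programmable interval: periodically grant one token to every uplink queue.
        timer.scheduleAtFixedRate(() -> {
            for (AtomicInteger t : tokens) t.incrementAndGet();
        }, intervalMicros, intervalMicros, TimeUnit.MICROSECONDS);
    }

    // A queue going from the downlink interface to the uplink interface may only be
    // forwarded when it holds a token (condition 3 in the polling rules above).
    boolean tryConsume(int queueIndex) {
        return tokens[queueIndex].getAndUpdate(v -> v > 0 ? v - 1 : v) > 0;
    }
}
```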
The CPU module 704 is mainly responsible for protocol processing with the node server, configuration of the address table 706, and configuration of the rate control module 708.
Ethernet protocol conversion gateway:
as shown in fig. 8, the network interface module (downlink network interface module 801, uplink network interface module 802), switching engine module 803, CPU module 804, packet detection module 805, rate control module 808, address table 806, packet buffer 807, MAC addition module 809, and MAC deletion module 810 are mainly included.
Wherein, the data packet coming in from the downlink network interface module 801 enters the packet detection module 805; the packet detection module 805 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type and packet length of the data packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id); then the MAC DA, MAC SA, and length or frame type (2 bytes) are removed by the MAC deletion module 810 and the packet enters the corresponding receive buffer, otherwise it is discarded;
the downlink network interface module 801 detects the sending buffer of the port; if there is a packet, it acquires the Ethernet MAC DA of the corresponding terminal according to the video network destination address DA of the packet, adds the Ethernet MAC DA of the terminal, the MAC SA of the Ethernet protocol conversion gateway, and the Ethernet length or frame type, and sends the packet.
The functions of the other modules in the Ethernet protocol conversion gateway are similar to those of the access switch.
And (3) a terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set top box mainly comprises a network interface module, a video/audio encoding/decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 devices of the metropolitan area network part can be mainly classified into 2 classes: node server, node switch, metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. View networking data packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), source Address (SA), reserved bytes, payload (PDU), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), 256 possibilities are at most provided, the second byte to the sixth byte are metropolitan area network addresses, and the seventh and eighth bytes are access network addresses;
the Source Address (SA) is also composed of 8 bytes (bytes), defined identically to the Destination Address (DA);
the reserved bytes consist of 2 bytes;
the payload portion has different lengths according to the types of the different datagrams, and is 64 bytes if it is various protocol packets, and 32+1024=1056 bytes if it is a unicast packet, and is of course not limited to the above 2 types;
The CRC consists of 4 bytes and its calculation method follows the standard ethernet CRC algorithm.
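For illustration, the following Java sketch parses an access-network packet according to the field widths listed above (8-byte DA, 8-byte SA, 2 reserved bytes, variable payload, 4-byte CRC). The class and field names are invented for the example, and no CRC verification is performed.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

class AccessNetworkPacket {
    final int packetType;        // first byte of DA: protocol / multicast / unicast, etc.
    final byte[] metroAddress;   // second to sixth byte of DA: metropolitan area network address
    final byte[] accessAddress;  // seventh and eighth byte of DA: access network address
    final byte[] sourceAddress;  // 8-byte SA, defined identically to DA
    final byte[] payload;        // 64 bytes for protocol packets, 1056 bytes for unicast packets
    final int crc;               // 4-byte CRC computed with the standard Ethernet CRC algorithm

    // Parses one access-network packet laid out as DA(8) SA(8) Reserved(2) Payload CRC(4).
    AccessNetworkPacket(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw);
        byte[] da = new byte[8];
        buf.get(da);
        packetType = da[0] & 0xFF;                       // at most 256 packet types
        metroAddress = Arrays.copyOfRange(da, 1, 6);
        accessAddress = Arrays.copyOfRange(da, 6, 8);
        sourceAddress = new byte[8];
        buf.get(sourceAddress);
        buf.getShort();                                  // 2 reserved bytes
        payload = new byte[raw.length - 8 - 8 - 2 - 4];  // remaining bytes before the CRC
        buf.get(payload);
        crc = buf.getInt();
    }
}
```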
2.2 metropolitan area network packet definition
The topology of the metropolitan area network is a graph; there may be 2 or even more than 2 connections between two devices, i.e. there may be more than 2 connections between node switches and node servers, between node switches and node switches, and between node switches and node servers. However, the metropolitan area network address of a metropolitan area network device is unique, so in order to accurately describe the connection relationship between metropolitan area network devices, a parameter is introduced in the embodiment of the present invention: a label, to uniquely describe a metropolitan area network device.
The definition of a label in this specification is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming that there are two connections between device A and device B, there are 2 labels for packets from device A to device B and 2 labels for packets from device B to device A. The label is divided into an in label and an out label; assuming that the label of a packet entering device A (the in label) is 0x0000, the label of the packet when leaving device A (the out label) may become 0x0001. The network access process of the metropolitan area network is a network access process under centralized control, that is, the address allocation and label allocation of the metropolitan area network are both dominated by the metropolitan area server, and the node switch and the node server execute them passively; this is different from MPLS, where label allocation is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metropolitan area network mainly includes the following parts:
DA SA Reserved tag Payload CRC
That is, Destination Address (DA), Source Address (SA), Reserved bytes (Reserved), Label, Payload (PDU), CRC. The format of the label may be defined with reference to the following: the label is 32 bits, of which the high 16 bits are reserved and only the low 16 bits are used, and its position is between the reserved bytes and the payload of the packet.
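As a small illustration of the label field described above, the sketch below extracts the low 16 bits of the 32-bit label and shows an in-label to out-label rewrite; the mapping table is purely hypothetical, since label assignment is controlled centrally by the metropolitan area server.

```java
class MetroLabel {
    // The 32-bit label sits between the reserved bytes and the payload; only the
    // low 16 bits are used, the high 16 bits are reserved.
    static int readLabel(int rawLabelField) {
        return rawLabelField & 0xFFFF;
    }

    // Example of label rewriting at a device: a packet entering with in-label 0x0000
    // may leave with out-label 0x0001. The mapping table here is illustrative only.
    static int swapLabel(int inLabel, int[] inToOutLabelTable) {
        return inToOutLabelTable[inLabel] & 0xFFFF;
    }
}
```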

Claims (8)

1. A video display method, performed in a mobile terminal, comprising:
receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet;
analyzing the media packet to obtain a target data packet with the video type, and placing the target data packet at the tail of a video queue;
detecting the length of a target data packet in the video queue;
judging whether the length of a target data packet in the video queue is greater than a preset length;
if the length of the target data packets in the video queue is greater than the preset length, discarding N target data packets in the video queue, wherein N is equal to the difference value between the number of the target data packets in the video queue and the preset length, and the N target data packets comprise N data packets sequentially selected from the head of the video queue;
the target data packet is obtained from the head of the video queue through a decoding thread, and a decoder is called through the decoding thread to decode the target data packet, so that an original pixel stream is obtained;
and rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
2. The method of claim 1, further comprising, prior to said receiving, by the first receive thread, media packets sent by the streaming media server:
and creating the first receiving thread, wherein the first receiving thread and a second receiving thread which is created by the mobile terminal and is used for receiving signaling are asynchronous threads.
3. The method of claim 1, wherein the decoder is a hard decoder or a soft decoder.
4. The method of claim 1, wherein the drawing control comprises a surfacview control, and the original pixel stream is a YUV formatted pixel stream.
5. A video display apparatus provided in a mobile terminal, comprising:
the receiving module is used for receiving a media packet sent by the streaming media server through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet;
the analysis module is used for analyzing the media packet to obtain a target data packet whose type is the video type, and placing the target data packet at the tail of a video queue;
the detection module is used for detecting the length of the target data packets in the video queue;
the judging module is used for judging whether the length of the target data packets in the video queue is greater than a preset length;
the discarding module is used for discarding N target data packets in the video queue if the length of the target data packets in the video queue is greater than the preset length, wherein N is equal to the difference between the number of the target data packets in the video queue and the preset length, and the N target data packets comprise N data packets sequentially selected from the head of the video queue;
the decoding module is used for obtaining the target data packet from the head of the video queue through a decoding thread, and calling a decoder through the decoding thread to decode the target data packet to obtain an original pixel stream;
and the display module is used for rendering the original pixel stream by using a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
6. The apparatus of claim 5, wherein before the media packet sent by the streaming media server is received through the first receiving thread, the apparatus further comprises:
a creating module, which is used for creating the first receiving thread, wherein the first receiving thread and a second receiving thread, which is created by the mobile terminal and is used for receiving signaling, are asynchronous threads.
7. A computer readable storage medium, characterized in that a computer program is stored thereon, and the computer program, when executed by a processor, implements the video display method according to any one of claims 1 to 4.
8. A video display apparatus comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of the video display method of any one of claims 1 to 4.
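For readers tracing the queue-handling steps recited in claims 1 and 5, the following is a minimal, non-authoritative sketch of the receive / trim / decode flow in Java; the class name, the preset length of 25 packets and the use of an ArrayDeque are illustrative assumptions and are not taken from the claims.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative sketch of the video-queue handling described in claims 1 and 5. */
public class VideoQueueSketch {

    private static final int PRESET_LENGTH = 25;           // assumed preset length
    private final Deque<byte[]> videoQueue = new ArrayDeque<>();

    /** Receiving thread: enqueue a video-type target data packet at the tail of the queue. */
    public synchronized void onVideoPacket(byte[] targetPacket) {
        videoQueue.addLast(targetPacket);
        int overflow = videoQueue.size() - PRESET_LENGTH;  // N = queue length - preset length
        for (int i = 0; i < overflow; i++) {
            videoQueue.pollFirst();                         // discard N packets from the head
        }
    }

    /** Decoding thread: take the next target data packet from the head of the queue. */
    public synchronized byte[] takeForDecoding() {
        return videoQueue.pollFirst();                      // null if the queue is empty
    }
}
```

Discarding the overflow from the head of the queue keeps the newest packets, so the decoding thread stays close to the live picture instead of accumulating latency.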
CN202010188270.5A 2020-03-17 2020-03-17 Video display method, device and readable storage medium Active CN111510759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010188270.5A CN111510759B (en) 2020-03-17 2020-03-17 Video display method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN111510759A CN111510759A (en) 2020-08-07
CN111510759B (en) 2023-10-13

Family

ID=71872471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010188270.5A Active CN111510759B (en) 2020-03-17 2020-03-17 Video display method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111510759B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727175A (en) * 2021-08-11 2021-11-30 西安诺瓦星云科技股份有限公司 Multi-channel video source playback method and system and display control equipment
CN113870799B (en) * 2021-09-09 2022-11-18 瑞芯微电子股份有限公司 System display method and storage device of electronic ink screen device
CN114205662B (en) * 2021-12-13 2024-02-20 北京蔚领时代科技有限公司 Low-delay video rendering method and device of iOS (integrated operation system) terminal

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101674486A (en) * 2009-09-29 2010-03-17 深圳市融创天下科技发展有限公司 Streaming media audio and video synchronization method and system
CN103108186A (en) * 2013-02-21 2013-05-15 中国对外翻译出版有限公司 Method of achieving high-definition transmission of videos
CN103237191A (en) * 2013-04-16 2013-08-07 成都飞视美视频技术有限公司 Method for synchronously pushing audios and videos in video conference
CN105430322A (en) * 2016-01-22 2016-03-23 深圳市华玺科技有限公司 Method and system for accessing video conference by unmanned aerial vehicle
CN106230839A (en) * 2016-08-03 2016-12-14 青岛海信宽带多媒体技术有限公司 The acceptance control method of Real Time Streaming and device
CN205901964U (en) * 2016-08-09 2017-01-18 陕西中联电科电子有限公司 Unmanned aerial vehicle image teletransmission interactive terminal
WO2017211112A1 (en) * 2016-06-06 2017-12-14 中兴通讯股份有限公司 Video live streaming method, user device, and unmanned aerial vehicle controller
CN108270732A (en) * 2016-12-30 2018-07-10 北京视联动力国际信息技术有限公司 A kind of Streaming Media processing method and system
CN109361945A (en) * 2018-10-18 2019-02-19 广州市保伦电子有限公司 The meeting audiovisual system and its control method of a kind of quick transmission and synchronization
CN109756789A (en) * 2018-12-28 2019-05-14 视联动力信息技术股份有限公司 A kind of loss treating method and system of audio, video data packet
CN110062192A (en) * 2019-04-18 2019-07-26 视联动力信息技术股份有限公司 Data processing method, device and storage medium in video conference
CN110324580A (en) * 2019-07-05 2019-10-11 视联动力信息技术股份有限公司 A kind of monitor video playback method and device based on view networking
CN110324644A (en) * 2019-07-05 2019-10-11 视联动力信息技术股份有限公司 UAV video live broadcasting method, system, electronic equipment and readable storage medium
CN110418186A (en) * 2019-02-01 2019-11-05 腾讯科技(深圳)有限公司 Audio and video playing method, apparatus, computer equipment and storage medium
CN209676370U (en) * 2019-01-21 2019-11-22 华平信息技术股份有限公司 Portable satellite video terminal and video-signal system
CN110519627A (en) * 2018-05-21 2019-11-29 视联动力信息技术股份有限公司 A kind of synchronous method and device of audio data
CN110771160A (en) * 2018-07-23 2020-02-07 深圳市大疆创新科技有限公司 Video uniform display method, terminal device and machine-readable storage medium
CN110832875A (en) * 2018-07-23 2020-02-21 深圳市大疆创新科技有限公司 Video processing method, terminal device and machine-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10123040B2 (en) * 2016-08-30 2018-11-06 Qualcomm Incorporated Intra-coded video frame caching for video telephony sessions

Also Published As

Publication number Publication date
CN111510759A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111510759B (en) Video display method, device and readable storage medium
CN108881815B (en) Video data transmission method and device
CN108737768B (en) Monitoring method and monitoring device based on monitoring system
CN108574818B (en) Information display method and device and server
CN110049271B (en) Video networking conference information display method and device
CN109889779B (en) Message out-of-order processing method and device
CN111447393B (en) Conference maintenance method, conference maintenance device and readable storage medium
CN108881948B (en) Method and system for video inspection network polling monitoring video
CN111464817A (en) Code rate control method and device and readable storage medium
CN110460804B (en) Conference data transmitting method, system, device and computer readable storage medium
CN109246486B (en) Method and device for framing
CN111327868B (en) Method, terminal, server, equipment and medium for setting conference speaking party roles
CN110049268B (en) Video telephone connection method and device
CN108965783B (en) Video data processing method and video network recording and playing terminal
CN110719432A (en) Data transmission method and device, electronic equipment and storage medium
CN110769297A (en) Audio and video data processing method and system
CN111478880B (en) Data processing method and device
CN111447407B (en) Method and device for transmitting monitoring resources
CN110677392B (en) Video data transmission method and device
CN110086773B (en) Audio and video data processing method and system
CN109889516B (en) Method and device for establishing session channel
CN109769012B (en) Web server access method and device
CN111447396A (en) Audio and video transmission method and device, electronic equipment and storage medium
CN110149306B (en) Media data processing method and device
CN108882049B (en) Data display method and video networking terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant