CN109587512B - Method and system for storing audio and video data - Google Patents

Method and system for storing audio and video data

Info

Publication number
CN109587512B
Authority
CN
China
Prior art keywords
audio
video
video data
node server
storing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811280770.0A
Other languages
Chinese (zh)
Other versions
CN109587512A (en)
Inventor
李学军
沈军
王洪超
郭忠平
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201811280770.0A
Publication of CN109587512A
Application granted
Publication of CN109587512B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion

Abstract

An embodiment of the invention provides a method and a system for storing audio and video data. In the method, a second video network node server extracts audio and video data from audio and video data packets in real time and stores the data to a first location, obtaining a first audio and video file. While the store operation runs, the server generates state information describing that operation and judges whether the first audio and video file meets a separation requirement. When it does, the server ends the current store operation and stores the subsequently extracted data to the first location as a second audio and video file, the file following the first. The embodiment realizes the function of browsing historical audio and video data, generates state information that characterizes the progress of the store operation, and reduces the pressure of transmitting and displaying audio and video files.

Description

Method and system for storing audio and video data
Technical Field
The invention relates to the technical field of video networking, in particular to a method and a system for storing audio and video data.
Background
The video network is a special real-time network that transmits high-definition video at high speed over Ethernet hardware using a dedicated protocol; it is a higher-level form of the Internet.
The video networking monitoring sharing server, also called the sharing platform, is mainly used to share the audio and video data of monitoring terminals in the video network with the monitoring platform. However, the audio and video data of a monitoring terminal are real-time and irreversible: the monitoring platform can only display live audio and video, historical audio and video data cannot be browsed, and the utilization rate of the terminal's audio and video data is therefore low.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a method for storing audio and video data and a corresponding storage system for audio and video data that overcome, or at least partially solve, the above problems.
To solve the above problems, an embodiment of the present invention discloses a method for storing audio and video data. The method is applied to a video network that includes a first video network node server and a second video network node server in communication with each other. The first video network node server is used to access a monitoring terminal in the video network and obtain its audio and video data; the second video network node server is used to convert the audio and video data from the first video network node server into audio and video data packets supporting a real-time transport protocol. The method includes: the second video network node server extracts the audio and video data from the audio and video data packets in real time; stores the data extracted in real time to a preset first location to obtain a first audio and video file; generates, during the store operation, state information for that operation and judges whether the first audio and video file meets a preset separation requirement; and, when the first audio and video file meets the separation requirement, ends the current store operation and stores the data extracted in real time to the first location to obtain a second audio and video file, the audio and video file following the first.
Optionally, the second video network node server judging, during the store operation, whether the first audio and video file meets the preset separation requirement includes: judging whether the duration of the first audio and video file is greater than or equal to a preset time threshold and whether the file contains a key frame; when the duration is greater than or equal to the preset time threshold and the file contains a key frame, the second video network node server determines that the first audio and video file meets the separation requirement.
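The separation check just described can be sketched as follows. This is a minimal illustration only; the `AVFile` model, its field names, and the 60-second default threshold are assumptions for the sketch, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class AVFile:
    """Hypothetical stand-in for an audio/video file currently being written."""
    duration_ms: int = 0          # accumulated duration of the stored data
    has_key_frame: bool = False   # whether a key frame has been stored

def meets_separation_requirement(f: AVFile, time_threshold_ms: int = 60_000) -> bool:
    """The file may be closed and a new one started only when BOTH hold:
    its duration has reached the preset threshold AND it contains a key
    frame, so the stored segment remains independently decodable."""
    return f.duration_ms >= time_threshold_ms and f.has_key_frame
```

Note that both conditions must hold: a long file with no key frame keeps growing until a key frame arrives.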
Optionally, the state information is a storing state, a store-complete state, or a storing-new-file state. After the second video network node server generates the state information for the store operation, the method further includes: the second video network node server returning the state information to the first video network node server at a preset time period, the first video network node server being used to display the state information.
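The periodic state callback described above might look like the following sketch. The `StateReporter` name, the state strings, and the externally driven reporting trigger are all illustrative assumptions; a real server would drive `on_period` from a timer and send over its server-to-server channel:

```python
from enum import Enum
from typing import Callable, List

class SaveState(Enum):
    STORING = "storing"
    STORE_COMPLETE = "store complete"
    STORING_NEW_FILE = "storing new file"

class StateReporter:
    """Holds the latest save state and pushes it upstream once per
    reporting period via an injected send callback."""
    def __init__(self, send: Callable[[str], None]):
        self.send = send
        self.state = SaveState.STORING

    def on_period(self) -> None:
        # Called once per configured time period: report the current state
        # back to the first video network node server for display.
        self.send(self.state.value)

reports: List[str] = []
r = StateReporter(reports.append)
r.on_period()                        # periodic report while storing
r.state = SaveState.STORING_NEW_FILE
r.on_period()                        # report after the file was rolled over
```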
Optionally, the method further includes: the second video network node server judging, during the store operation, whether the remaining capacity of the first disk holding the first location is less than or equal to a preset space threshold; and, when it is, keeping the current store operation while storing the audio and video data extracted in real time to a preset second location on a second disk.
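The disk fallback above can be sketched as a pure decision function. The location strings and the `free_bytes` map are assumptions for the sketch; a real server would query the filesystem for the remaining capacity of each disk:

```python
def choose_store_path(first: str, second: str,
                      free_bytes: dict, threshold: int) -> str:
    """Pick where the next extracted audio/video data should be written:
    stay on the first location while its disk still has more free capacity
    than the preset space threshold, otherwise fall back to the second
    location on the other disk. `free_bytes` maps a location to the free
    capacity of the disk holding it."""
    if free_bytes[first] <= threshold:
        return second
    return first
```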
Optionally, the second video network node server extracting the audio and video data from the audio and video data packet in real time includes: deleting the packet header of the audio and video data packet to obtain the audio and video data.
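A minimal sketch of the header-stripping step follows. The patent only says the packet header is deleted; the 12-byte default below matches the minimal RTP fixed header and is an assumption, since the exact header length is not specified:

```python
def strip_packet_header(packet: bytes, header_len: int = 12) -> bytes:
    """Remove the packet header and return the raw audio/video payload.
    header_len defaults to 12 bytes, the size of a minimal RTP fixed
    header (an assumption; the patent does not name the length)."""
    if len(packet) < header_len:
        raise ValueError("packet shorter than its header")
    return packet[header_len:]
```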
An embodiment of the present invention also discloses a system for storing audio and video data, applied to a video network that includes a first video network node server and a second video network node server in communication with each other. The first video network node server is used to access a monitoring terminal in the video network and acquire its audio and video data; the second video network node server is used to convert the audio and video data from the first video network node server into audio and video data packets supporting a real-time transport protocol. The second video network node server includes: an extraction module for extracting the audio and video data from the audio and video data packets in real time; a storage module for storing the data extracted in real time to a preset first location to obtain a first audio and video file; a generating module for generating state information for the store operation while the storage module stores the data; and a judging module for judging, during the same operation, whether the first audio and video file meets a preset separation requirement. The storage module is further configured to end the current store operation when the first audio and video file meets the separation requirement, and to store the data extracted in real time to the first location to obtain a second audio and video file, the audio and video file following the first.
Optionally, the judging module is configured to judge, while the storage module stores the data extracted in real time to the preset first location, whether the duration of the first audio and video file is greater than or equal to a preset time threshold and whether the file contains a key frame, and to determine that the first audio and video file meets the separation requirement when both conditions hold.
Optionally, the state information is a storing state, a store-complete state, or a storing-new-file state; the second video network node server further includes a callback module for returning the state information to the first video network node server at a preset time period after the generating module produces it, the first video network node server being used to display the state information.
Optionally, the judging module is further configured to judge, during the store operation, whether the remaining capacity of the first disk holding the first location is less than or equal to a preset space threshold; the storage module is further configured to keep the current store operation when it is, while storing the audio and video data extracted in real time to a preset second location on a second disk.
Optionally, the extraction module is configured to delete the packet header of the audio and video data packet to obtain the audio and video data.
The embodiment of the invention has the following advantages:
The embodiment of the invention is applied to a video network that may include a first video network node server and a second video network node server in communication with each other. The first video network node server is used to access a monitoring terminal in the video network and acquire its audio and video data, and the second video network node server is used to convert the audio and video data from the first video network node server into audio and video data packets supporting a real-time transport protocol.
In the embodiment of the invention, the second video network node server extracts the audio and video data from the converted audio and video data packets in real time and stores the data extracted in real time to a preset first location, obtaining a first audio and video file. In addition, during the store operation, the second video network node server generates state information for that operation and judges whether the first audio and video file meets a preset separation requirement. When it does, the server ends the current store operation and stores the data extracted in real time to the first location to obtain a second audio and video file; that is, it stores the subsequently extracted data as a new audio and video file.
By applying the characteristics of the video network, first, the second video network node server can store the audio and video data, i.e., the real-time audio and video data of the monitoring terminal, realizing the function of browsing historical audio and video data. Second, the server can generate state information for the save operation while saving the data; the state information characterizes the progress of that operation. Third, the server can store the audio and video data as a new second audio and video file once the first audio and video file meets the separation requirement, which prevents all of the data extracted in real time from accumulating in a single audio and video file and reduces the pressure of transmitting and displaying audio and video files.
Drawings
Fig. 1 is a schematic networking diagram of the video network of the present invention;
Fig. 2 is a schematic diagram of the hardware structure of a node server of the present invention;
Fig. 3 is a schematic diagram of the hardware structure of an access switch of the present invention;
Fig. 4 is a schematic diagram of the hardware structure of an Ethernet protocol conversion gateway of the present invention;
Fig. 5 is a flowchart of the steps of an embodiment of a method for storing audio and video data of the present invention;
Fig. 6 is a schematic design diagram of storing audio and video data in the sharing platform of the present invention;
Fig. 7 is a block diagram of the second video network node server in an embodiment of the system for storing audio and video data of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video network is an important milestone in network development. It is a real-time network that enables real-time transmission of high-definition video, pushing numerous Internet applications toward high definition and face-to-face HD communication.
Using real-time high-definition video exchange technology, the video network can integrate dozens of required services (video, voice, pictures, text, communication, data, and so on) on one network platform, such as high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD, video mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, and information distribution, delivering HD-quality video through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below.
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in the video network improves on traditional Ethernet to face the potentially enormous video traffic on the network. Unlike pure network packet switching or network circuit switching, the video networking technology employs packet switching to satisfy the demands of streaming (a data transmission technique that turns received data into a stable, continuous stream, so that the sound or image the user perceives is smooth and browsing can begin before the whole file has been transmitted). The video networking technology has the flexibility, simplicity, and low cost of packet switching together with the quality and security guarantees of circuit switching, realizing seamless whole-network switched virtual circuits and a unified data format.
Switching Technology
The video network adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It offers end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets; user data requires no format conversion anywhere on the network. The video network is a higher-level form of Ethernet and a real-time exchange platform that can realize whole-network, large-scale, real-time transmission of high-definition video, which the existing Internet cannot, pushing numerous network video applications toward high definition and unification.
Server Technology
The server technology of the video networking and unified video platform differs from that of a traditional server: its streaming media transmission is established on a connection-oriented basis, its data processing capacity is independent of traffic and communication time, and a single network layer can carry both signaling and data. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than general data processing, and efficiency is improved by more than a hundredfold over a traditional server.
Storage Technology
To handle media content of very large capacity and very large traffic, the ultra-high-speed storage technology of the unified video platform adopts an advanced real-time operating system. The program information in a server instruction is mapped to specific hard disk space, and the media content no longer passes through the server but is sent directly and instantly to the user terminal; the typical user waiting time is under 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of an IP Internet system of the same grade, yet concurrent throughput is three times that of a traditional hard disk array, improving overall efficiency by more than tenfold.
Network Security Technology
The structural design of the video network eliminates, by structure, the network security problems that trouble the Internet, through means such as independent permission control for each service and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, avoids attacks by hackers and viruses, and provides users with a structurally worry-free secure network.
Service Innovation Technology
The unified video platform integrates service and transmission: whether for a single user, a private-network user, or a network aggregate, connection is automatic and established once. A user terminal, set-top box, or PC connects directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform replaces traditional, complex application programming with a menu-style configuration table, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a layered, centrally controlled network structure, and the networks controlled by the node servers and metropolitan area servers can have various structures such as tree, star, and ring.
The access network part can form a unified video platform (circled part), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 The devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The devices of the access network part can be mainly classified into 3 types: node servers, access switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
As shown in fig. 2, the node server mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204.
The network interface module 201, the CPU module 203, and the disk array module 204 all feed into the switching engine module 202. The switching engine module 202 looks up the address table 205 for each incoming packet to obtain its direction information, and stores the packet in the queue of the corresponding packet buffer 206 according to that information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disks, including initialization, reading, and writing; the CPU module 203 is mainly responsible for protocol processing with the access switches and terminals (not shown), for configuring the address table 205 (comprising a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and for configuring the disk array module 204.
The access switch:
As shown in fig. 3, the access switch mainly includes a network interface module (downstream network interface module 301 and upstream network interface module 302), a switching engine module 303, and a CPU module 304.
A packet arriving from the downstream network interface module 301 (uplink data) enters the packet detection module 305. The packet detection module 305 checks whether the destination address (DA), source address (SA), packet type, and packet length meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise the packet is discarded. A packet arriving from the upstream network interface module 302 (downlink data) enters the switching engine module 303 directly, as does a packet from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 303 travels from a downstream network interface toward an upstream network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; otherwise it is stored in the queue of the corresponding packet buffer 307 according to its direction information. In either case, if the queue of the packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues; in this embodiment of the present invention there are two cases:
If the queue goes from a downstream network interface to an upstream network interface, forwarding requires that: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the rate control module has been obtained.
If the queue does not go from a downstream network interface to an upstream network interface, forwarding requires that: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet buffer queues going from downstream network interfaces to upstream network interfaces, to control the rate of upstream forwarding.
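The upstream forwarding gate described above can be sketched as a token-bucket check. The class below is an illustrative model, not the switch's actual implementation; in the real device the token is granted by the rate control module at the programmed interval:

```python
class UpstreamQueue:
    """Model of a packet buffer queue from a downstream interface: it may
    forward only if the port send buffer is not full, its packet counter is
    above zero, AND it holds a token from the rate control module."""
    def __init__(self) -> None:
        self.packets = 0   # queue packet counter
        self.tokens = 0    # tokens granted by the rate control module

    def grant_token(self) -> None:
        # Rate control module: one token per programmed interval.
        self.tokens += 1

    def try_forward(self, send_buffer_full: bool) -> bool:
        """Return True and consume one packet and one token if all three
        forwarding conditions hold; otherwise forward nothing."""
        if send_buffer_full or self.packets == 0 or self.tokens == 0:
            return False
        self.packets -= 1
        self.tokens -= 1
        return True
```

A queue that is not on the downstream-to-upstream path would skip the token condition, matching the second case above.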
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
A data packet arriving from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type, and packet length meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC deletion module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receive buffer; otherwise the packet is discarded.
The downlink network interface module 401 monitors the send buffer of its port; if a packet is present, it obtains the Ethernet MAC DA of the corresponding terminal from the packet's video network destination address DA, prepends the terminal's Ethernet MAC DA, the Ethernet protocol conversion gateway's MAC SA, and the Ethernet length or frame type, and sends the packet.
The other modules of the Ethernet protocol conversion gateway function similarly to those of the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node servers, node switches, and metropolitan area servers. A node switch mainly comprises a network interface module, a switching engine module, and a CPU module; a metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC
The Destination Address (DA) consists of 8 bytes: the first byte represents the type of the data packet (e.g., one of the various protocol packets, a multicast data packet, a unicast data packet, etc.), allowing at most 256 possible types; the second through sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address.
The Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA).
The reserved byte consists of 2 bytes.
The length of the payload depends on the type of the data packet: it is 64 bytes if the packet is one of the various protocol packets, and 1056 bytes if the packet is a unicast data packet, but it is not limited to these 2 types.
The CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
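As an illustrative sketch (not part of the claimed method), the access-network packet layout described above can be expressed in code. The field names, the helper functions, and the use of `zlib.crc32` as the "standard Ethernet CRC algorithm" are assumptions for illustration; the real implementation is not specified at this level of detail.

```python
import struct
import zlib

def parse_access_packet(frame: bytes) -> dict:
    """Split an access-network packet (DA 8B | SA 8B | Reserved 2B | Payload | CRC 4B)."""
    da, sa = frame[:8], frame[8:16]
    payload = frame[18:-4]
    (crc,) = struct.unpack(">I", frame[-4:])
    return {
        "packet_type": da[0],        # first DA byte: packet type (at most 256 values)
        "metro_address": da[1:6],    # bytes 2-6: metropolitan area network address
        "access_address": da[6:8],   # bytes 7-8: access network address
        "sa": sa,
        "payload": payload,
        # CRC over everything before the trailing 4 bytes, per standard Ethernet CRC-32
        "crc_ok": (zlib.crc32(frame[:-4]) & 0xFFFFFFFF) == crc,
    }

def build_access_packet(da: bytes, sa: bytes, payload: bytes) -> bytes:
    """Assemble a packet with 2 reserved bytes and a trailing CRC-32."""
    body = da + sa + b"\x00\x00" + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)
```

A 64-byte payload (the protocol-packet size named above) thus yields a 86-byte frame: 8 + 8 + 2 + 64 + 4.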
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2, or even more than 2, connections between two devices, i.e., there may be more than 2 connections between a node switch and a node server, or between two node switches. However, the metropolitan area network address of each metropolitan area network device is unique; therefore, in order to accurately describe the connection relationship between metropolitan area network devices, a parameter is introduced in the embodiment of the present invention: a label, to uniquely describe a connection between metropolitan area network devices.
In this specification, the definition of the label is similar to that of a label in Multi-Protocol Label Switching (MPLS). Assuming that there are two connections between a device A and a device B, there are 2 labels for a packet from device A to device B, and 2 labels for a packet from device B to device A. Labels are classified into incoming labels and outgoing labels: assuming that the label (incoming label) of a packet entering device A is 0x0000, the label (outgoing label) of the packet leaving device A may become 0x0001. The network access process of the metropolitan area network is a network access process under centralized control, that is, both address allocation and label allocation of the metropolitan area network are dominated by the metropolitan area server, and the node switch and the node server execute passively. This differs from label allocation in MPLS, where labels are the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC
Namely: Destination Address (DA), Source Address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; its position is between the reserved bytes and the payload of the packet.
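As a minimal sketch of the label handling described above (offsets, the swap-table representation, and function names are illustrative assumptions), the 32-bit label sits after the 8-byte DA, 8-byte SA, and 2 reserved bytes, and a forwarding device rewrites the incoming label with the outgoing label that the metropolitan area server allocated centrally:

```python
import struct

LABEL_OFFSET = 8 + 8 + 2  # label follows DA (8B), SA (8B), and 2 reserved bytes

def swap_label(packet: bytes, label_table: dict) -> bytes:
    """Rewrite the incoming label with the outgoing label for this device.

    Only the lower 16 bits of the 32-bit label field are used; the table
    entries stand in for allocations made centrally by the metro server
    (not negotiated hop-by-hop as in MPLS).
    """
    (label_field,) = struct.unpack_from(">I", packet, LABEL_OFFSET)
    in_label = label_field & 0xFFFF
    out_label = label_table[in_label]
    out = bytearray(packet)
    struct.pack_into(">I", out, LABEL_OFFSET, out_label & 0xFFFF)
    return bytes(out)
```

For the 0x0000 → 0x0001 example above, `swap_label(pkt, {0x0000: 0x0001})` leaves the addresses and payload untouched and rewrites only the label field.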
Based on the above characteristics of the video network, one of the core concepts of the embodiments of the present invention is proposed: the second video network node server extracts the audio and video data of the monitoring terminal from the audio and video data packet in real time according to the protocol of the video network, then stores the audio and video data extracted in real time as an audio and video file, and generates state information for the storing operation. Moreover, it can judge whether the audio and video file meets a separation requirement and, when the audio and video file meets the separation requirement, store the audio and video data extracted in real time as a new audio and video file.
Referring to fig. 5, a flowchart illustrating steps of an embodiment of a method for storing audio and video data according to the present invention is shown, where the method may be applied to a video network, and the video network may include a first video network node server and a second video network node server, where the first video network node server communicates with the second video network node server, the first video network node server is used to access a monitoring terminal in the video network and acquire audio and video data of the monitoring terminal, and the second video network node server is used to convert the audio and video data from the first video network node server into an audio and video data packet supporting a real-time transmission protocol, where the method specifically includes the following steps:
and step 501, the second video network node server extracts the audio and video data from the audio and video data packet in real time.
In the embodiment of the present invention, the second video network node server may be a video networking monitoring sharing server, and it extracts the audio and video data in real time from the audio and video data packet supporting the real-time transmission protocol. The audio/video data packet may be composed of a packet header and packet data; the packet header may include number information, length information, and the like of the audio/video data packet, and the packet data may include the specific audio/video data. Therefore, when the second video network node server extracts the audio and video data from the audio and video data packet in real time, it may delete the packet header of the audio and video data packet to obtain the packet data, i.e., the specific audio and video data. The audio and video data thus extracted in real time are the real-time audio and video data of the monitoring terminal.
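A hedged sketch of step 501 follows. The text does not fix the exact header layout of the real-time transport packets, so the 8-byte header below (4-byte sequence number, 2-byte payload length, 2 padding bytes) is a hypothetical stand-in used only to illustrate "delete the packet header to obtain the packet data":

```python
import struct

HEADER_FORMAT = ">IH2x"                        # seq (4B), payload length (2B), 2 pad bytes
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)   # 8 bytes under this assumed layout

def build_av_packet(seq: int, av_data: bytes) -> bytes:
    """Wrap audio/video data in the assumed header (number + length information)."""
    return struct.pack(HEADER_FORMAT, seq, len(av_data)) + av_data

def extract_av_data(packet: bytes) -> bytes:
    """Step 501: delete the packet header to obtain the audio/video payload."""
    seq, length = struct.unpack_from(HEADER_FORMAT, packet)
    return packet[HEADER_SIZE:HEADER_SIZE + length]
```

With this layout, `extract_av_data(build_av_packet(n, data))` returns `data` unchanged, which is the round trip the conversion gateway relies on.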
And 502, the second video network node server stores the audio and video data extracted in real time to a preset first position to obtain a first audio and video file.
In the embodiment of the present invention, the purpose of extracting the audio/video data in real time by the second node server of the video network in step 501 is to store the audio/video data, so that the second node server of the video network stores the audio/video data extracted in real time to the preset first position of the first disk, so as to obtain the first audio/video file. The first disk may be any one of a plurality of preset storage media, and the embodiment of the present invention does not specifically limit the capacity, material, brand, model, and the like of the first disk. The first location may be any location in the first disk, for example, under a certain drive, under a certain folder, and the like, and the embodiment of the present invention does not specifically limit the location and the like of the first location in the first disk. The first audio/video file may be audio/video data itself, or may be a file obtained by converting the audio/video data, for example, a first audio/video file is obtained by converting a format of the audio/video data, and a first audio/video file is obtained by compressing the audio/video data.
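The storing step can be sketched as follows, under stated assumptions: the "first position" is modeled as a directory path on the first disk, the timestamp-based file name is purely illustrative, and the data is written as-is (the text notes the file may equally be a converted or compressed form of the data):

```python
import os
import time

def open_recording(first_position: str):
    """Open a new audio/video file at the preset first position (step 502).

    The directory stands in for "a certain folder on the first disk"; the
    nanosecond suffix is only to keep illustrative file names unique.
    """
    os.makedirs(first_position, exist_ok=True)
    name = f"record-{time.time_ns()}.av"
    return open(os.path.join(first_position, name), "ab")

def store_av_data(f, av_data: bytes) -> None:
    """Append the audio/video data extracted in real time to the open file."""
    f.write(av_data)
```

Each call to `store_av_data` appends the newly extracted data, so the first audio/video file grows as long as the save operation continues.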
In step 503, the second node server of the video network generates state information for the operation of storing the audio and video data in the process of storing the audio and video data.
In the embodiment of the present invention, the second node server of the video networking may not only store the audio and video data, but also generate the state information for the operation of storing the audio and video data, where the state information may be a storing state, a storing completion state, a storing new file state, or the like.
In a preferred embodiment of the present invention, the purpose of the second video network node server generating the state information for the operation of storing the audio/video data is to indicate, through the state information, the progress of the storing operation; therefore, the second video network node server may also return the state information to the first video network node server according to a preset time period, so that the first video network node server may display the state information. For example, the second video network node server may return the state information to the first video network node server with a time period of 1 minute.
In a preferred embodiment of the present invention, the second node server of the video networking may return the state information to the front-end display application program of the first node server of the video networking according to a preset time period, where the front-end display application program is mainly responsible for displaying the entire monitoring directory of the monitoring terminal, retrieving the audio/video data of the monitoring terminal, configuring the first node server of the video networking, and the like.
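The periodic status callback of step 503 can be sketched as below. The three state strings mirror the saving / save-complete / saving-new-file states named in the text; the `report_fn` callable standing in for the network round-trip to the first video network node server (and its front-end display application) is a hypothetical placeholder:

```python
SAVING = "saving"
SAVE_COMPLETE = "save complete"
SAVING_NEW_FILE = "saving new file"

class StatusReporter:
    """Tracks the state of a save operation and reports it each period."""

    def __init__(self, report_fn, period_s: float = 60.0):
        self.state = SAVING          # state of the current save operation
        self._report_fn = report_fn  # stand-in for the call to the first node server
        self.period_s = period_s     # e.g. the 1-minute period from the example

    def report_once(self) -> None:
        """Send the current state (one tick of the reporting period)."""
        self._report_fn(self.state)

def run_reporter(reporter: StatusReporter, ticks: int) -> None:
    """Simulate `ticks` reporting periods; time.sleep(reporter.period_s) omitted."""
    for _ in range(ticks):
        reporter.report_once()
```

In production the loop would sleep for `period_s` between reports; here the delay is omitted so the behavior can be exercised directly.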
Step 504, the second video network node server judges, in the process of the operation of storing the audio/video data, whether the first audio/video file meets a preset separation requirement; if so, step 505 is executed; if not, the operation of storing the audio and video data continues.
In the embodiment of the invention, the second video networking node server not only generates the state information aiming at the operation of storing the audio and video data, but also can judge whether the first audio and video file obtained by storing the audio and video data meets the preset separation requirement, wherein the separation requirement is used for judging whether the audio and video data is stored into a plurality of audio and video files, so that the capacity of each audio and video file is reduced, and the pressure of transmission, playing and the like of each audio and video file is reduced.
In a preferred embodiment of the present invention, when the second video network node server determines whether the first audio/video file meets the separation requirement, it may be determined whether the duration information of the first audio/video file is greater than or equal to a preset time threshold, and whether the first audio/video file contains a key frame. The preset time threshold may be 20 minutes, and the numerical value, unit, and the like of the preset time threshold are not particularly limited in the embodiment of the present invention. Whether a first audio/video file contains a key frame can be determined in a program stream packet header or a program stream system packet header of a program stream packet of audio/video data, and if the program stream packet header or the program stream system packet header contains identification information of the key frame, the first audio/video file can be determined to contain the key frame; if the program stream header or the program stream system header does not contain the identification information of the key frame, it may be determined that the first audio/video file does not contain the key frame. And when the duration information of the first audio/video file is greater than or equal to a preset time threshold and the first audio/video file contains the key frame, the second video network node server determines that the first audio/video file meets the separation requirement. And when the duration information of the first audio/video file is less than a preset time threshold value or the first audio/video file does not contain a key frame, the second video network node server determines that the first audio/video file does not meet the separation requirement.
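The separation rule above reduces to a single conjunction, sketched here with the 20-minute example threshold; the boolean `has_key_frame` flag stands in for checking the key-frame identification information in the program stream (system) packet header, which is an assumption about how that check is surfaced:

```python
SPLIT_THRESHOLD_S = 20 * 60  # the 20-minute example threshold from the text

def meets_separation_requirement(duration_s: float, has_key_frame: bool,
                                 threshold_s: float = SPLIT_THRESHOLD_S) -> bool:
    """Step 504: split only when the file is long enough AND contains a key frame."""
    return duration_s >= threshold_s and has_key_frame
```

Requiring the key frame ensures the next file can start decoding cleanly; a long file without a key frame keeps accumulating data rather than being split.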
And 505, the second video network node server finishes the current operation of storing the audio and video data, and stores the audio and video data extracted in real time to the first position to obtain a second audio and video file.
In the embodiment of the invention, when the first audio and video file meets the separation requirement, the second video network node server ends the operation of storing the audio and video data as the first audio and video file, and stores the audio and video data obtained by real-time extraction as the second audio and video file, that is, the second video network node server stores the audio and video data obtained by real-time extraction as a new audio and video file. The new audio/video file, i.e., the second audio/video file, may still be located at the first position; in this case, the second audio/video file is located at the same position as the first audio/video file.
Step 506, in the process of the operation of storing the audio and video data, the second video network node server judges whether the remaining space capacity of the first disk where the first position is located is less than or equal to a preset space threshold; if so, step 507 is executed; if not, the operation of storing the audio and video data continues.
In the embodiment of the invention, the second video networking node server not only generates the state information aiming at the operation of storing the audio and video data while storing the audio and video data, but also can judge whether a first audio and video file obtained by storing the audio and video data meets the preset separation requirement or not, and also can judge whether the residual space capacity of a first disk used for storing the audio and video data is less than or equal to the preset space threshold or not. The preset spatial threshold may be 50MB or 2% of the entire spatial capacity, and the numerical value and unit of the preset spatial threshold are not specifically limited in the embodiment of the present invention.
And 507, the second video network node server keeps the current operation of storing the audio and video data, and stores the audio and video data extracted in real time to a preset second position of the second disk.
In the embodiment of the present invention, when the remaining space capacity of the first disk is less than or equal to the preset space threshold, the second video network node server keeps the current operation of storing the audio/video data, but no longer stores the audio/video data at the first position of the first disk; at this time, the state information of the storing operation for the audio/video file at the first position of the first disk (which may be the first audio/video file, the second audio/video file, or another audio/video file) may be the save-complete state. Instead, the server stores the audio/video data at the second position of the second disk; at this time, the state information of the storing operation for the audio/video file at the second position of the second disk (which, likewise, may be the first audio/video file, the second audio/video file, or another audio/video file) may be the saving state and the saving-new-file state.
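Steps 506-507 can be sketched as a path-selection check, assuming the "positions" are directory paths and using the 50 MB example threshold from the text; `shutil.disk_usage` is a standard way to read remaining capacity, though the actual implementation is not specified:

```python
import shutil

SPACE_THRESHOLD = 50 * 1024 * 1024  # the 50 MB example threshold

def choose_storage_path(first_position: str, second_position: str,
                        threshold: int = SPACE_THRESHOLD) -> str:
    """Steps 506-507: keep the storage task alive, but redirect new data to
    the second disk when the first disk's free space drops to the threshold."""
    free = shutil.disk_usage(first_position).free
    if free <= threshold:
        return second_position  # switch disks without stopping the task
    return first_position
```

The recording loop would call this before opening each new file, so the save operation continues uninterrupted when the first disk fills up.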
Based on the above description about the embodiment of the storage method for audio and video data, a method for storing video data by a sharing platform is introduced below, as shown in fig. 6, the video networking monitoring and networking management and scheduling platform is divided into a front-end display application (web page) and a back-end service application, and the front-end display application is responsible for displaying the whole monitoring directory of the monitoring terminal, calling the video data of the monitoring terminal, configuring various configurations of the video networking monitoring and networking management and scheduling platform, and the like. The back-end service application program is responsible for the unified management of the monitoring terminals accessed in the whole video network and the docking service of the national standard (GB/T28181) platform monitoring system. The video networking monitoring sharing server is also called a sharing platform, and can be understood as a gateway, and is responsible for converting video data of the video networking monitoring terminals into video data packets of a real-time transmission protocol, so as to share the video data of the video networking to the monitoring platform based on the real-time transmission protocol. 
In an interaction example, the back-end service application program of the video networking monitoring and networking management and scheduling platform sends a video data packet supporting the real-time transmission protocol to the sharing platform. The sharing platform removes the packet header from the received real-time transmission protocol video data packet to obtain the video data of the monitoring terminal, stores the video data as a video file, and attaches state information (saving / save complete / saving new file) for the storing operation; the sharing platform actively reports the state information of the storing operation to the back-end service application program at regular intervals. Since a video file saved without limit would be too large to be conveniently transmitted and played, in order to prevent the video file from becoming too long and too large, when the duration of the video file exceeds 20 minutes and the file contains a key frame, the saving operation for the current video data is ended and the video data is saved as a new video file. When the storage space for storing the video data is insufficient, the storage task is not stopped; instead, another disk is selected as the storage path and the video data continues to be stored.
The embodiment of the invention is applied to the video network, and the video network can comprise a first video network node server and a second video network node server, wherein the first video network node server is communicated with the second video network node server, the first video network node server is used for accessing a monitoring terminal in the video network and acquiring audio and video data of the monitoring terminal, and the second video network node server is used for converting the audio and video data from the first video network node server into an audio and video data packet supporting a real-time transmission protocol.
In the embodiment of the invention, the second video networking node server extracts the audio and video data from the converted audio and video data packet in real time and stores the audio and video data extracted in real time to the preset first position to obtain the first audio and video file. In addition, in the process of storing the audio and video data, the second video network node server can also generate state information aiming at the storage operation, and meanwhile, the second video network node server can also judge whether the first audio and video file meets the preset separation requirement. When the first audio and video file meets the separation requirement, the second video network node server finishes the current storage operation, and stores the audio and video data extracted in real time to the first position to obtain a second audio and video file, namely the second video network node server stores the audio and video data extracted in real time as a new audio and video file.
By applying the characteristics of the video network, on one hand, the second video network node server can store the audio and video data, i.e., the real-time audio and video data of the monitoring terminal, thereby realizing the function of browsing historical audio and video data. On another hand, the second video network node server can also generate the state information of the saving operation while saving the audio and video data, and the state information is used for representing the progress of the saving operation. On yet another hand, the second video network node server can also store the audio and video data as a new second audio and video file when the first audio and video file meets the separation requirement, so that the audio and video data extracted in real time are prevented from being stored as one single overly large audio and video file, reducing the pressure of transmitting and displaying the audio and video files.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 7, a block diagram of a second node server of the video network in an embodiment of a storage system for audio and video data of the present invention is shown, where the system may be applied to a video network, and the video network may include a first node server of the video network and a second node server of the video network, where the first node server of the video network communicates with the second node server of the video network, the first node server of the video network is used to access a monitoring terminal in the video network and acquire audio and video data of the monitoring terminal, and the second node server of the video network is used to convert the audio and video data from the first node server of the video network into an audio and video data packet supporting a real-time transmission protocol, and the second node server of the video network in the system may specifically include the following modules:
and the extraction module 701 is used for extracting the audio and video data from the audio and video data packet in real time.
The storing module 702 is configured to store the audio and video data extracted in real time to a preset first position to obtain a first audio and video file.
The generating module 703 is configured to generate state information for an operation of storing the audio/video data when the storing module 702 stores the audio/video data extracted in real time to a preset first position to obtain a first audio/video file.
The determining module 704 is configured to determine whether the first audio/video file meets a preset separation requirement when the storing module 702 stores the audio/video data extracted in real time to a preset first position to obtain the first audio/video file.
The storage module 702 is further configured to, when the first audio/video file meets the separation requirement, end the current operation of storing the audio/video data, and store the audio/video data obtained by real-time extraction to the first location to obtain a second audio/video file, where the second audio/video file is a next audio/video file of the first audio/video file.
In a preferred embodiment of the present invention, the determining module 704 is configured to determine whether duration information of a first audio/video file is greater than or equal to a preset time threshold and the first audio/video file includes a key frame, when the storing module 702 stores the audio/video data extracted in real time to a preset first location to obtain the first audio/video file; and when the duration information of the first audio/video file is greater than or equal to a preset time threshold and the first audio/video file contains the key frame, determining that the first audio/video file meets the separation requirement.
In a preferred embodiment of the present invention, the state information is a storing state, a storing completion state or a storing new file state; the second video networking node server further comprises: the callback module 705 is configured to return the state information to the first node server of the video network according to a preset time period after the generation module 703 generates the state information for the operation of storing the audio and video data, where the first node server of the video network is used to display the state information.
In a preferred embodiment of the present invention, the determining module 704 is further configured to determine whether the remaining space capacity of the first disk where the first location is located is less than or equal to a preset space threshold when the storing module 702 stores the audio/video data extracted in real time to the preset first location to obtain the first audio/video file; the storing module 702 is further configured to, when the remaining space capacity of the first disk where the first location is located is less than or equal to a preset space threshold, maintain a current operation of storing the audio/video data, and store the audio/video data extracted in real time to a preset second location located on the second disk.
In a preferred embodiment of the present invention, the extracting module 701 is configured to delete a header of the audio/video data packet to obtain the audio/video data.
For the system embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method for storing audio and video data and the system for storing audio and video data provided by the invention are described in detail, and specific examples are applied in the text to explain the principle and the implementation mode of the invention, and the description of the examples is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for storing audio/video data, applied to a video network, the video network comprising a first video network node server and a second video network node server, wherein the first video network node server communicates with the second video network node server, the first video network node server is used for accessing a monitoring terminal in the video network, the monitoring terminal is uniformly managed through a video networking monitoring and networking management and scheduling platform, and the audio and video data of the monitoring terminal are scheduled to the first video network node server; the second video network node server is used for converting the audio and video data from the first video network node server into an audio and video data packet supporting a real-time transmission protocol so as to share the audio and video data packet to a monitoring platform based on the real-time transmission protocol; the method comprises the following steps:
the second video networking node server extracts the audio and video data from the audio and video data packet in real time;
the second video networking node server stores the audio and video data extracted in real time to a preset first position to obtain a first audio and video file;
the second video network node server generates state information aiming at the operation of storing the audio and video data in the process of storing the audio and video data operation and judges whether the first audio and video file meets the preset separation requirement or not; the meeting of the preset separation requirement means that the duration information of the first audio/video file is greater than or equal to a preset time threshold, and the first audio/video file contains a key frame;
when the first audio/video file meets the separation requirement, the second video networking node server finishes the current operation of storing the audio/video data, and stores the audio/video data extracted in real time to the first position to obtain a second audio/video file, wherein the second audio/video file is the next audio/video file of the first audio/video file; the first position is any position in a first magnetic disk, and the first magnetic disk is any one of a plurality of preset storage media.
2. The method for storing audio/video data according to claim 1, wherein the step of the second video networking node server judging, during the operation of storing the audio/video data, whether the first audio/video file meets the preset separation requirement comprises:
the second video networking node server judges, during the operation of storing the audio/video data, whether the duration of the first audio/video file is greater than or equal to a preset time threshold and whether the first audio/video file contains a key frame;
and when the duration of the first audio/video file is greater than or equal to the preset time threshold and the first audio/video file contains the key frame, the second video networking node server determines that the first audio/video file meets the separation requirement.
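Claims 1 and 2 describe a segment writer that closes the current file once it is both long enough and contains a key frame. The check can be sketched in Python as follows; every class, method, and parameter name here is illustrative and not part of the claimed method:

```python
class SegmentWriter:
    """Illustrative sketch of the duration-plus-key-frame separation
    requirement in claims 1 and 2 (hypothetical names throughout)."""

    def __init__(self, duration_threshold_ms: int):
        self.duration_threshold_ms = duration_threshold_ms
        self.segments = []               # finished "audio/video files"
        self._current = []               # frames of the file being written
        self._current_duration_ms = 0
        self._has_keyframe = False

    def write_frame(self, duration_ms: int, is_keyframe: bool, payload: bytes) -> None:
        self._current.append(payload)
        self._current_duration_ms += duration_ms
        self._has_keyframe = self._has_keyframe or is_keyframe
        # Separation requirement: long enough AND contains a key frame.
        if self._current_duration_ms >= self.duration_threshold_ms and self._has_keyframe:
            self._start_next_file()

    def _start_next_file(self) -> None:
        # End the current save operation and begin the next file.
        self.segments.append(self._current)
        self._current = []
        self._current_duration_ms = 0
        self._has_keyframe = False
```

With a 1000 ms threshold and 40 ms frames in which every 25th frame is a key frame, the writer closes a file every 25 frames; requiring a key frame in each file is presumably what keeps every stored file independently playable.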
3. The method for storing audio/video data according to claim 1, wherein the state information is a storing state, a storage-completed state, or a storing-new-file state;
after the second video networking node server generates the state information for the operation of storing the audio/video data, the method further comprises:
the second video networking node server returning the state information to the first video networking node server according to a preset time period, wherein the first video networking node server is used for displaying the state information.
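The periodic status callback of claim 3 can be sketched as follows; the names and the callback mechanism are assumptions, since the patent specifies only a preset reporting period and the three states:

```python
class StatusReporter:
    """Hypothetical sketch of claim 3: return the current saving state to
    the first node server once per preset time period."""

    STATES = ("storing", "storage complete", "storing new file")

    def __init__(self, period_s: float, send):
        self.period_s = period_s
        self.send = send                   # stands in for the network return path
        self._last_sent_at = float("-inf")

    def maybe_report(self, state: str, now: float) -> None:
        assert state in self.STATES
        # Forward the state only when a full period has elapsed.
        if now - self._last_sent_at >= self.period_s:
            self.send(state)               # the first node server displays this
            self._last_sent_at = now
```

With a 5-second period, calls at t = 0 s and t = 6 s are forwarded while a call at t = 3 s is suppressed, so the first node server sees at most one state update per period.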
4. The method for storing audio/video data according to claim 1, wherein the method further comprises:
the second video networking node server judging, during the operation of storing the audio/video data, whether the remaining space of the first disk where the first position is located is less than or equal to a preset space threshold;
and when the remaining space of the first disk where the first position is located is less than or equal to the preset space threshold, the second video networking node server keeping the current operation of storing the audio/video data and storing the audio/video data extracted in real time to a preset second position located on a second disk.
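The disk-failover rule of claim 4 reduces to comparing the first disk's free space against the preset threshold. A sketch with hypothetical function and path names:

```python
def choose_write_location(free_bytes: int, threshold_bytes: int,
                          first_position: str, second_position: str) -> str:
    """Claim 4's rule: once the first disk's remaining space is less than
    or equal to the preset space threshold, write to the preset second
    position on the second disk instead."""
    if free_bytes <= threshold_bytes:
        return second_position
    return first_position
```

In practice `free_bytes` could come from something like `shutil.disk_usage(path).free`, though the patent does not specify how the remaining capacity is measured.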
5. The method for storing audio/video data according to any one of claims 1 to 4, wherein the step of the second video networking node server extracting the audio/video data from the audio/video data packet in real time comprises:
the second video networking node server deleting the packet header of the audio/video data packet to obtain the audio/video data.
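If the real-time transport protocol in question is RTP (RFC 3550), "deleting the packet header" amounts to skipping the 12-byte fixed header plus any CSRC entries. A sketch under that assumption, ignoring the optional header extension and padding:

```python
def rtp_payload(packet: bytes) -> bytes:
    """Strip an RTP fixed header (RFC 3550) to recover the audio/video
    payload, as in claim 5's header-deletion step. Header extensions
    (X bit) and padding (P bit) are ignored for brevity."""
    if len(packet) < 12:
        raise ValueError("shorter than the 12-byte RTP fixed header")
    csrc_count = packet[0] & 0x0F          # CC field: number of 4-byte CSRC ids
    header_len = 12 + 4 * csrc_count
    return packet[header_len:]
```

The fixed header is version/flags, payload type, sequence number, timestamp, and SSRC; whether the patent's video-networking packets carry additional proprietary framing is not stated.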
6. A system for storing audio/video data, applied to a video network, wherein the video network comprises a first video networking node server and a second video networking node server that communicate with each other; the first video networking node server is used for accessing monitoring terminals in the video network, the monitoring terminals are uniformly managed through a video networking monitoring and networking management and scheduling platform, and the audio/video data of the monitoring terminals are scheduled to the first video networking node server; the second video networking node server is used for converting the audio/video data from the first video networking node server into audio/video data packets supporting a real-time transport protocol, so as to share the audio/video data packets with a monitoring platform based on the real-time transport protocol; the second video networking node server comprises:
the extraction module is used for extracting the audio and video data from the audio and video data packet in real time;
the storage module is used for storing the audio and video data extracted in real time to a preset first position to obtain a first audio and video file;
the generating module is used for generating state information for the operation of storing the audio/video data while the storage module stores the audio/video data extracted in real time to the preset first position to obtain the first audio/video file;
the judging module is used for judging, while the storage module stores the audio/video data extracted in real time to the preset first position to obtain the first audio/video file, whether the first audio/video file meets a preset separation requirement; meeting the preset separation requirement means that the duration of the first audio/video file is greater than or equal to a preset time threshold and that the first audio/video file contains a key frame;
the storage module is further configured to, when the first audio/video file meets the separation requirement, end the current operation of storing the audio/video data and store the audio/video data extracted in real time to the first position to obtain a second audio/video file, the second audio/video file being the next audio/video file after the first audio/video file; the first position is any position on a first disk, and the first disk is any one of a plurality of preset storage media.
7. The system for storing audio/video data according to claim 6, wherein the judging module is configured to judge, while the storage module stores the audio/video data extracted in real time to the preset first position to obtain the first audio/video file, whether the duration of the first audio/video file is greater than or equal to a preset time threshold and whether the first audio/video file contains a key frame; and when the duration of the first audio/video file is greater than or equal to the preset time threshold and the first audio/video file contains the key frame, to determine that the first audio/video file meets the separation requirement.
8. The system for storing audio/video data according to claim 6, wherein the state information is a storing state, a storage-completed state, or a storing-new-file state;
the second video networking node server further comprises:
a callback module, used for returning the state information to the first video networking node server according to a preset time period after the generating module generates the state information for the operation of storing the audio/video data, wherein the first video networking node server is used for displaying the state information.
9. The system for storing audio/video data according to claim 6, wherein the judging module is further configured to judge, while the storage module stores the audio/video data extracted in real time to the preset first position to obtain the first audio/video file, whether the remaining space of the first disk where the first position is located is less than or equal to a preset space threshold;
the storage module is further configured to, when the remaining space of the first disk where the first position is located is less than or equal to the preset space threshold, keep the current operation of storing the audio/video data and store the audio/video data extracted in real time to a preset second position located on a second disk.
10. The system for storing audio/video data according to any one of claims 6 to 9, wherein the extraction module is configured to delete a packet header of the audio/video data packet to obtain the audio/video data.
CN201811280770.0A 2018-10-30 2018-10-30 Method and system for storing audio and video data Active CN109587512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811280770.0A CN109587512B (en) 2018-10-30 2018-10-30 Method and system for storing audio and video data

Publications (2)

Publication Number Publication Date
CN109587512A CN109587512A (en) 2019-04-05
CN109587512B (en) 2020-10-02

Family

ID=65920891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811280770.0A Active CN109587512B (en) 2018-10-30 2018-10-30 Method and system for storing audio and video data

Country Status (1)

Country Link
CN (1) CN109587512B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103686072A (en) * 2013-11-15 2014-03-26 北京视联动力国际信息技术有限公司 Video internet video monitoring method and system, protocol conversion server, and video internet server
CN106101595A (en) * 2016-07-12 2016-11-09 中科创达软件股份有限公司 A kind of segmentation Video data processing method, system and terminal
KR20170058301A (en) * 2015-11-18 2017-05-26 Bravo Ideas Digital Co., Ltd. Method for identifying a target object in a video file
CN107995499A (en) * 2017-12-04 2018-05-04 腾讯科技(深圳)有限公司 Processing method, device and the relevant device of media data
CN108462678A (en) * 2017-02-21 2018-08-28 北京视联动力国际信息技术有限公司 A kind of method and apparatus of checking monitoring video recording

Similar Documents

Publication Publication Date Title
CN110149262B (en) Method and device for processing signaling message and storage medium
CN111193788A (en) Audio and video stream load balancing method and device
CN109547728B (en) Recorded broadcast source conference entering and conference recorded broadcast method and system
CN109842519B (en) Method and device for previewing video stream
CN109474715B (en) Resource configuration method and device based on video network
CN109379254B (en) Network connection detection method and system based on video conference
CN110572607A (en) Video conference method, system and device and storage medium
CN110049273B (en) Video networking-based conference recording method and transfer server
CN109246135B (en) Method and system for acquiring streaming media data
CN109040656B (en) Video conference processing method and system
CN110475131B (en) Terminal connection method, server and terminal
CN109743284B (en) Video processing method and system based on video network
CN109151061B (en) Data storage method and device
CN109302384B (en) Data processing method and system
CN110769297A (en) Audio and video data processing method and system
CN110446058B (en) Video acquisition method, system, device and computer readable storage medium
CN110134892B (en) Loading method and system of monitoring resource list
CN110022500B (en) Packet loss processing method and device
CN109963107B (en) Audio and video data display method and system
CN109698953B (en) State detection method and system for video network monitoring equipment
CN109474661B (en) Method and system for processing network request event
CN110113555B (en) Video conference processing method and system based on video networking
CN111212255A (en) Monitoring resource obtaining method and device and computer readable storage medium
CN110536148B (en) Live broadcasting method and equipment based on video networking
CN110557411A (en) video stream processing method and device based on video network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant