CN108966038B - Video data processing method and video networking cache server - Google Patents


Info

Publication number
CN108966038B
CN108966038B CN201711445197.XA CN201711445197A
Authority
CN
China
Prior art keywords
video data
video
frames
data
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711445197.XA
Other languages
Chinese (zh)
Other versions
CN108966038A (en)
Inventor
胡贵超
王艳辉
潘廷勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201711445197.XA
Publication of CN108966038A
Application granted
Publication of CN108966038B
Legal status: Active (current)
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784Data processing by the network
    • H04N21/64792Controlling the complexity of the content stream, e.g. by dropping packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/631Multimode Transmission, e.g. transmitting basic layers and enhancement layers of the content over different transmission paths or transmitting with different error corrections, different keys or with different transmission protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643Communication protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention discloses a video data processing method and a video networking cache server. The method comprises the following steps: acquiring the sum of the data amounts of N frames of cached video data; when the sum of the data amounts of the N frames of video data is larger than a first preset threshold value, extracting M frames of video data from the N frames of video data; and sending the M frames of video data to a video networking terminal. The embodiment of the invention thereby solves the problems of video frame loss and video blocking that arise in prior-art video data processing methods when the network is unstable.

Description

Video data processing method and video networking cache server
Technical Field
The embodiment of the invention relates to the field of communication, in particular to a video data processing method and a video networking cache server.
Background
At present, with the rapid development of network technologies, bidirectional communications such as video conferences and video teaching are widely popularized in the aspects of life, work, learning and the like of users, and video networking technologies are increasingly applied to various technical fields.
In an application scenario of security monitoring using video networking technology, an IPC (Internet Protocol Camera) on the Internet side is required to acquire video data and send it out in IP format based on the TCP/IP (Transmission Control Protocol/Internet Protocol) protocol; an intermediate layer then converts the video data into the video networking data format and sends it to a video networking terminal based on the video networking protocol. During this conversion process, the intermediate layer needs to cache a large amount of video data locally.
However, the network conditions of the Internet are not stable. For example, the Internet often suffers disconnections, jitter and the like, so that no video data is sent to the video networking terminal and the video displayed by the terminal drops frames. When the Internet recovers, a large amount of video data may flow into the intermediate layer in a short time; if the intermediate layer sends all of that video data out at once, the video networking terminal cannot process so much data in a short time, causing problems such as blocking of the displayed video.
Therefore, the video data processing method in the prior art has the problems of video frame loss and video blocking under the condition of unstable network.
Disclosure of Invention
The invention provides a video data processing method and a video networking cache server, which are used for solving the problems of video frame loss and blocking that prior-art video data processing methods exhibit when the network is unstable.
In order to solve the above technical problem, an embodiment of the present invention provides a video data processing method, which is applied to a video networking cache server, where the video networking cache server caches N frames of video data, and the video data has a data volume, and the method includes:
acquiring the data quantity sum of the N frames of video data;
when the sum of the data amount of the N frames of video data is larger than a first preset threshold value, extracting M frames of video data from the N frames of video data; wherein M is more than 0 and less than N;
and sending the M frames of video data to a video network terminal.
Optionally, the video data has a corresponding buffering time, and the method further includes:
acquiring initial frame video data from the N frames of video data;
acquiring initial caching time corresponding to the initial frame video data;
and if the difference value between the initial caching time and the current time is greater than a second preset threshold value and the data quantity sum of the N frames of video data is less than a third preset threshold value, sending the N frames of video data to the video network terminal.
Optionally, the video data has a corresponding decoding frame rate, and before the step of extracting M frames of video data from the N frames of video data, the method further includes:
determining a value M according to the decoding frame rate of the video data;
the step of extracting M frames of video data from the N frames of video data includes:
sorting the N frames of video data according to the cache time;
and extracting the first M frames of video data in the sorted order as the M frames of video data.
Optionally, the step of sending the M frames of video data to a video network terminal includes:
respectively adding video network packet header information to the M frames of video data;
and sending the M frames of video data added with the video network header information to the video network terminal.
Optionally, the video networking cache server includes an intermediate cache layer, and the method further includes:
receiving at least one frame of video data;
and buffering the at least one frame of video data in the intermediate buffer layer.
In order to solve the above technical problem, an embodiment of the present invention provides a cache server for a video network, where the cache server caches N frames of video data, and the video data has a data size, and the cache server for a video network includes:
the data volume sum obtaining module is used for obtaining the data volume sum of the N frames of video data;
the video data extraction module is used for extracting M frames of video data from the N frames of video data when the sum of the data amount of the N frames of video data is greater than a first preset threshold value; wherein M is more than 0 and less than N;
and the first video data sending module is used for sending the M frames of video data to the video network terminal.
Optionally, the video data has a corresponding cache time, and the video network cache server further includes:
the initial frame video data acquisition module is used for acquiring initial frame video data from the N frames of video data;
an initial buffer time obtaining module, configured to obtain an initial buffer time corresponding to the initial frame video data;
and the second video data sending module is used for sending the N frames of video data to the video networking terminal if the difference value between the initial caching time and the current time is greater than a second preset threshold value and the data volume sum of the N frames of video data is less than a third preset threshold value.
Optionally, the video data has a corresponding decoding frame rate, and the video networking cache server further includes:
a value M determining module, configured to determine a value M according to a decoding frame rate of the video data;
the video data extraction module comprises:
the data sorting submodule is used for sorting the N frames of video data according to the cache time;
and the M-frame video data extraction submodule is used for extracting the first M frames of video data in the sorted order as the M frames of video data.
Optionally, the first video data sending module includes:
the video network packet header information adding submodule is used for respectively adding video network packet header information aiming at the M frames of video data;
and the video data sending submodule is used for sending the M frames of video data added with the video network header information to the video network terminal.
Optionally, the video network cache server includes an intermediate cache layer, and the video network cache server further includes:
the video data receiving module is used for receiving at least one frame of video data;
and the video data caching module is used for caching the at least one frame of video data in the middle caching layer.
According to the embodiment of the invention, when the sum of the data amounts of the cached N frames of video data is greater than the preset threshold value, M frames of video data are extracted from the N frames of video data and sent to the video networking terminal. Even when the network is unstable, this avoids both the situation in which no video data is sent to the video networking terminal and the situation in which a large amount of video data is sent to the terminal within a short time. Through this uniform packet mechanism, the problems of video frame loss and blocking that prior-art video data processing methods exhibit under unstable network conditions are solved.
Drawings
Fig. 1 is a flowchart of a video data processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of a video data processing method according to a second embodiment of the present invention;
fig. 3 is a block diagram of a video network cache server according to a third embodiment of the present invention;
fig. 4 is a block diagram of a video network cache server according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video data interaction system according to an embodiment of the present invention;
FIG. 6 is a networking schematic of a video network of the present invention;
FIG. 7 is a diagram of a hardware architecture of a node server according to the present invention;
fig. 8 is a schematic diagram of a hardware architecture of an access switch of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
Fig. 1 is a flowchart of a video data processing method according to an embodiment of the present invention, which is applied to a video network cache server, where the video network cache server caches N frames of video data, and the video data has a data size, and the method may specifically include the following steps:
and step 110, acquiring the data volume sum of the N frames of video data.
It should be noted that the video data processing method provided by the embodiment of the present invention may be applied to a video network cache server. The video network cache server can be used for converting IP format video data collected by IPC in the Internet into video network format video data and sending the video network format video data to a video network terminal in the video network.
The video network cache server can receive N frames of video data collected by Internet IPC and cache the data in the middle cache layer.
In a specific implementation, the video network cache server may count the sum of the data volumes of the N frames of video data, so that the video network cache server obtains the sum of the data volumes of the N frames of video data.
Step 120, when the sum of the data amount of the N frames of video data is greater than a first preset threshold, extracting M frames of video data from the N frames of video data; wherein M is more than 0 and less than N.
In a specific implementation, the video network cache server may compare the sum of the data amounts of the N frames of video data with a first preset threshold.
A person skilled in the art may set a specific value of the first preset threshold according to actual situations, which is not limited in the embodiment of the present invention. For example, for a video stream with a bitrate of 2M/sec, the first preset threshold may be set to 1M, i.e. 0.5 sec of video data is buffered.
When the sum of the data amounts of the N frames of video data cached by the video networking cache server is larger than the first preset threshold value, the N frames of video data can be sent out according to a uniform packet mechanism. More specifically, M frames of video data may be extracted from the N frames of video data, where M is less than N.
And step 130, sending the M frames of video data to the video network terminal.
In a specific implementation, after the M frames of video data are extracted, the M frames of video data may be sent to the video networking terminal. In practical application, if the M frames of video data are in IP format, they can be converted into video data in video network format and then transmitted.
It should be added that, when extracting and sending the M frames of video data, the number of frames to extract and send may be determined according to the frame rate or the bit rate of the video stream. For example, if the frame rate of the video stream is 30 frames per second, 30 frames of video data may be extracted and sent to the video networking terminal every second. As another example, if the bit rate of the video stream is 2 MB per second, 2 MB of video data may be extracted and sent to the video networking terminal every second. In practice, extraction and sending are usually driven by the frame rate of the video stream, because the video decoder of the video networking terminal usually decodes complete frames of video data.
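For ease of understanding, a simplified sketch of this pacing logic is given below. The Python fragment is purely illustrative; the threshold value, the frame rate and the helper names (for example send_to_terminal) are assumptions made for the example and are not taken from the embodiment.

```python
from collections import deque

FIRST_THRESHOLD = 1 * 1024 * 1024    # assumed: 1 MB, roughly 0.5 s of a 2 MB/s stream
FRAME_RATE = 30                       # assumed frame rate of the video stream

cache = deque()                       # intermediate cache layer: (cache_time, frame_bytes) tuples

def send_to_terminal(frame_bytes):
    """Placeholder: convert to video-networking format and send to the terminal."""
    pass

def pace_once():
    """One pass of the uniform packet mechanism, invoked roughly once per second."""
    total = sum(len(frame) for _, frame in cache)
    if total > FIRST_THRESHOLD:
        # Extract M = FRAME_RATE frames (keeping M < N) instead of flushing everything.
        for _ in range(min(FRAME_RATE, len(cache) - 1)):
            _, frame = cache.popleft()
            send_to_terminal(frame)
```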
According to the embodiment of the invention, when the sum of the data amounts of the cached N frames of video data is greater than the preset threshold value, M frames of video data are extracted from the N frames of video data and sent to the video networking terminal. Even when the network is unstable, this avoids both the situation in which no video data is sent to the video networking terminal and the situation in which a large amount of video data is sent to the terminal within a short time. Through this uniform packet mechanism, the problems of video frame loss and blocking that prior-art video data processing methods exhibit under unstable network conditions are solved.
Example two
Fig. 2 is a flowchart of a video data processing method provided in the second embodiment of the present invention, and is applied to a video network cache server, where the video network cache server caches N frames of video data, and the video data has a data size, and the method may specifically include the following steps:
step 210, receiving at least one frame of video data; the video network cache server comprises an intermediate cache layer.
In specific implementation, a network camera in the internet can collect video data and send one or more frames of video data to a video networking cache server based on an internet protocol. Thus, the video network cache server may receive video data from the network cameras.
Step 220, buffering the at least one frame of video data in the intermediate buffer layer.
The video network cache server can be provided with an intermediate cache layer, and the video network cache server can cache the received video data in the intermediate cache layer so as to count the sum of data quantity of the multi-frame video data cached in the intermediate cache layer.
Step 230, obtaining the data amount sum of the N frames of video data;
optionally, the video data has a corresponding buffering time, and after the step 230, the method may further include:
acquiring initial frame video data from the N frames of video data;
acquiring initial caching time corresponding to the initial frame video data;
and if the difference value between the initial caching time and the current time is greater than a second preset threshold value and the data quantity sum of the N frames of video data is less than a third preset threshold value, sending the N frames of video data to the video network terminal.
In a specific implementation, the video network cache server may record the cache time of each frame of video data, and according to the cache time of the video data, the video network cache server may determine the video data with the earliest cache time in the currently cached N frames of video data, as the initial frame of video data.
For the initial frame of video data, its caching time may be taken as the initial caching time, and the difference between the initial caching time and the current time is then calculated. In addition, the sum of the data amounts of the N frames of video data may be compared with a third preset threshold value. When the difference between the initial caching time and the current time is larger than the second preset threshold value and, at the same time, the sum of the data amounts of the N frames of video data is smaller than the third preset threshold value, this indicates that the Internet connection may currently be broken, or that the network camera has failed and is not sending video data. In that case there is no need to keep waiting for a specific amount of video data to accumulate, and the N frames of video data can be sent directly to the video networking terminal.
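A minimal sketch of this timeout check is given below; the threshold values and helper names are assumptions chosen for illustration rather than values defined by the embodiment.

```python
import time

SECOND_THRESHOLD = 2.0            # assumed: maximum age of the oldest cached frame, in seconds
THIRD_THRESHOLD = 256 * 1024      # assumed: "small backlog" bound, in bytes

def maybe_flush(cache, send_to_terminal):
    """cache: list of (cache_time, frame_bytes) tuples held in the intermediate cache layer."""
    if not cache:
        return
    initial_cache_time = min(cache_time for cache_time, _ in cache)   # initial frame's cache time
    total = sum(len(frame) for _, frame in cache)
    # The oldest frame has waited too long and the backlog is small: the source is
    # probably silent (network outage or camera fault), so send everything now.
    if time.time() - initial_cache_time > SECOND_THRESHOLD and total < THIRD_THRESHOLD:
        for _, frame in sorted(cache, key=lambda item: item[0]):
            send_to_terminal(frame)
        cache.clear()
```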
Step 240, when the sum of the data amount of the N frames of video data is greater than a first preset threshold, extracting M frames of video data from the N frames of video data; wherein M is more than 0 and less than N.
Optionally, the video data has a corresponding decoding frame rate, and before the step 240, the method may further include:
determining a value M according to the decoding frame rate of the video data;
the step 240 may specifically include:
step 241, sorting the N frames of video data according to the cache time;
and step 242, extracting the first M frames of video data in the sorted order as the M frames of video data.
It should be noted that, in an actual application scenario, the video decoder of the video networking terminal generally decodes complete frames of video data, so the value M may be determined according to the decoding frame rate corresponding to the video data. For example, if the decoding frame rate of the video stream is 30 frames per second, then M is 30. When N frames of video data of a certain total size have been cached, the N frames of video data may be sorted according to their caching time, and the first M frames in the sorted order are taken as the M frames of video data. The M frames of video data are then sent to the video networking terminal, which can decode them as complete frames.
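The sorting-and-extraction step may be sketched as follows; the representation of cached frames as (cache_time, frame_bytes) tuples and the default decoding frame rate of 30 are illustrative assumptions, not values fixed by the embodiment.

```python
def extract_m_frames(cache, decoding_frame_rate=30):
    """cache: list of (cache_time, frame_bytes) tuples.
    Returns the M frames to send and the frames that stay in the cache."""
    m = decoding_frame_rate                               # M follows the decoding frame rate
    ordered = sorted(cache, key=lambda item: item[0])     # sort by cache time
    return ordered[:m], ordered[m:]                       # first M frames, remainder
```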
And step 250, sending the M frames of video data to the video network terminal.
Optionally, the step 250 may specifically include:
step 251, adding video network header information to the M frames of video data respectively;
and 252, sending the M frames of video data added with the video network header information to the video network terminal.
In specific implementation, when sending video data to a video network terminal, video network header information may be added to the video data, so as to convert the video data in the IP format into video data in the video network format, and send the video data to the video network terminal through the video network.
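Conceptually, the conversion amounts to prepending video networking header information to each frame before it is sent over the video network. The sketch below assumes a simplified 18-byte header (an 8-byte DA, an 8-byte SA and 2 reserved bytes, mirroring the access network packet layout described later in this specification); the addresses and function names are placeholders rather than values defined by the embodiment.

```python
def add_vn_header(frame_bytes: bytes, dst_addr: bytes, src_addr: bytes) -> bytes:
    """Prepend simplified video-networking header information:
    DA (8 bytes) + SA (8 bytes) + reserved (2 bytes)."""
    assert len(dst_addr) == 8 and len(src_addr) == 8
    return dst_addr + src_addr + b"\x00\x00" + frame_bytes

def send_m_frames(m_frames, dst_addr, src_addr, send_over_video_network):
    """m_frames: list of (cache_time, frame_bytes) tuples selected for sending."""
    for _, frame in m_frames:
        send_over_video_network(add_vn_header(frame, dst_addr, src_addr))
```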
EXAMPLE III
Fig. 3 is a block diagram of a video network cache server according to a third embodiment of the present invention, where the video network cache server caches N frames of video data, where the video data has a data size, and the video network cache server 300 may specifically include the following modules:
a data amount sum obtaining module 310, configured to obtain a data amount sum of the N frames of video data;
a video data extracting module 320, configured to extract M frames of video data from the N frames of video data when a sum of data amounts of the N frames of video data is greater than a first preset threshold; wherein M is more than 0 and less than N;
the first video data sending module 330 is configured to send the M frames of video data to the video network terminal.
According to the embodiment of the invention, when the sum of the data amounts of the cached N frames of video data is greater than the preset threshold value, M frames of video data are extracted from the N frames of video data and sent to the video networking terminal. Even when the network is unstable, this avoids both the situation in which no video data is sent to the video networking terminal and the situation in which a large amount of video data is sent to the terminal within a short time. Through this uniform packet mechanism, the problems of video frame loss and blocking that prior-art video data processing methods exhibit under unstable network conditions are solved.
Example four
Fig. 4 is a block diagram of a video network cache server according to a fourth embodiment of the present invention, where the video network cache server caches N frames of video data, where the video data has a data size, and the video network cache server 400 may specifically include the following modules:
a video data receiving module 410, configured to receive at least one frame of video data; the video networking cache server comprises an intermediate cache layer;
a video data buffering module 420, configured to buffer the at least one frame of video data in the intermediate buffering layer.
A data amount sum obtaining module 430, configured to obtain a data amount sum of the N frames of video data;
the video data extraction module 440 is configured to, when the sum of the data amounts of the N frames of video data is greater than a first preset threshold, extract M frames of video data from the N frames of video data; wherein M is more than 0 and less than N;
a first video data sending module 450, configured to send the M frames of video data to the video network terminal.
Optionally, the video data has a corresponding buffering time, and the video network caching server 400 may further include:
the initial frame video data acquisition module is used for acquiring initial frame video data from the N frames of video data;
an initial buffer time obtaining module, configured to obtain an initial buffer time corresponding to the initial frame video data;
and the second video data sending module is used for sending the N frames of video data to the video networking terminal if the difference value between the initial caching time and the current time is greater than a second preset threshold value and the data volume sum of the N frames of video data is less than a third preset threshold value.
Optionally, the video data has a corresponding decoding frame rate, and the video networking cache server 400 may further include:
a value M determining module, configured to determine a value M according to a decoding frame rate of the video data;
the video data extraction module 440 may specifically include:
the data sorting submodule 441 is configured to sort the N frames of video data according to the buffering time;
the M-frame video data extracting sub-module 442 is configured to extract video data of M frames before the sorting as the M-frame video data.
Optionally, the first video data sending module 450 may specifically include:
a video network header information adding sub-module 451, configured to add video network header information to the M frames of video data, respectively;
and the video data sending submodule 452 is configured to send the M frames of video data added with the video network header information to the video network terminal.
Since the processing procedure described in the device embodiment has been described in detail in the method embodiment, it is not described herein again.
In order to facilitate those skilled in the art to understand the video data processing method according to the embodiment of the present invention, the following description will be made with reference to the specific example of fig. 5.
Fig. 5 is a schematic structural diagram of a video data interaction system according to an embodiment of the present invention. It can be seen from the figure that the network camera collects video data and sends the video data to the middle layer through the internet, the middle layer performs processing such as caching and format conversion on the video data, and the video data after the conversion processing is sent to the video networking terminal through the video networking.
It should be added that the above embodiments of the present invention can be applied to the communication network of the video networking. The video networking is an important milestone in network development: it is a real-time network that can achieve real-time transmission of high-definition video and pushes many Internet applications toward high-definition, face-to-face video.
The video networking adopts real-time high-definition video switching technology and can integrate, on a single network platform, dozens of required services covering video, voice, pictures, text, communication and data, such as high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, video on demand (VOD), video mail, personal video recording (PVR), intranet (self-office) channels, intelligent video broadcast control and information distribution, and delivers high-definition video through a television or a computer.
To better understand the embodiments of the present invention, the following description refers to the internet of view:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
The network technology innovation of the video networking improves on traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure network packet switching or pure network circuit switching, the video networking technology adopts packet switching while meeting streaming requirements. It has the flexibility, simplicity and low cost of packet switching together with the quality and security guarantees of circuit switching, thereby realizing network-wide switched virtual circuits and seamless connection of data formats.
Switching Technology (Switching Technology)
The video network retains two advantages of Ethernet, asynchrony and packet switching, while eliminating Ethernet's defects on the premise of full compatibility. It provides end-to-end seamless connectivity across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere in the network. The video networking is a higher-level form of Ethernet and a real-time switching platform; it can realize the network-wide, large-scale, real-time transmission of high-definition video that the current Internet cannot achieve, and pushes many network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology (Storage Technology)
The ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to handle media content of very large capacity and very large flow. Program information in a server instruction is mapped to specific hard disk space, and the media content no longer passes through the server but is sent directly and immediately to the user terminal, with a typical user waiting time of less than 0.2 second. The optimized sector distribution greatly reduces the mechanical seek movements of the hard disk heads; resource consumption is only 20% of that of an IP Internet system of the same grade, yet concurrent throughput 3 times that of a traditional hard disk array is achieved, and overall efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
Through measures such as independent permission control for each service and complete isolation of equipment and user data, the structural design of the video networking eliminates, at the structural level, the network security problems that trouble the Internet. It generally needs no antivirus programs or firewalls, prevents attacks by hackers and viruses, and provides users with a structurally worry-free, secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private-network user or an entire network, a connection is established automatically once. User terminals, set-top boxes or PCs connect directly to the unified video platform to obtain a variety of multimedia video services in various forms. The unified video platform uses a menu-style configuration table in place of traditional, complex application programming, so that complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 6, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 7, the system mainly includes a network interface module 701, a switching engine module 702, a CPU module 703, and a disk array module 704;
the network interface module 701, the CPU module 703 and the disk array module 704 enter the switching engine module 702; the switching engine module 702 performs an operation of looking up the address table 705 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a corresponding queue of the packet buffer 706 based on the packet's steering information; if the queue of the packet buffer 706 is nearly full, discard; the switching engine module 702 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 704 mainly implements control over the hard disk, including initialization, read-write, and other operations; the CPU module 703 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 705 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 704.
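As a rough, software-level illustration of the forwarding rule described above (not a description of the actual hardware), the polling of the packet buffer queues may be sketched as follows; the Port class and the queue representation are assumptions made for the example.

```python
class Port:
    """Illustrative output port with a bounded send buffer."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.send_buffer = []

    def send_buffer_full(self):
        return len(self.send_buffer) >= self.capacity

    def send(self, packet):
        self.send_buffer.append(packet)

def poll_queues(queues, ports):
    """One polling pass over all packet buffer queues of the switching engine.
    A queue is forwarded only when (1) the port send buffer is not full and
    (2) the queue packet counter is greater than zero."""
    for queue, port in zip(queues, ports):
        if len(queue) > 0 and not port.send_buffer_full():
            port.send(queue.pop(0))
```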
The access switch:
as shown in fig. 8, the network interface module mainly includes a network interface module (a downlink network interface module 801, an uplink network interface module 802), a switching engine module 803, and a CPU module 804;
A packet (uplink data) arriving from the downlink network interface module 801 enters the packet detection module 805. The packet detection module 805 checks whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 803, otherwise the packet is discarded. A packet (downlink data) arriving from the uplink network interface module 802 enters the switching engine module 803, as does a data packet arriving from the CPU module 804. The switching engine module 803 looks up the address table 806 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 803 travels from the downlink network interface toward the uplink network interface, it is stored in the queue of the corresponding packet buffer 807 in association with its stream-id; if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 803 does not travel from the downlink network interface toward the uplink network interface, it is stored in the queue of the corresponding packet buffer 807 according to its direction information; if that queue is nearly full, the packet is discarded.
The switching engine module 803 polls all packet buffer queues, which in this embodiment of the invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 808 is configured by the CPU module 804, and generates tokens for packet buffer queues from all downlink network interfaces to uplink network interfaces at programmable intervals to control the rate of uplink forwarding.
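The token-based rate control for downlink-to-uplink queues may be sketched in the same illustrative style; the refill interval and token counts below are assumptions and do not correspond to configured values of the code rate control module.

```python
import time

class RateControl:
    """Grants one token per queue every `interval` seconds, illustrating the
    CPU-configured, programmable token generation described above."""
    def __init__(self, interval=0.01):
        self.interval = interval
        self.tokens = {}          # queue id -> available tokens
        self.last = time.time()

    def refill(self, queue_ids):
        now = time.time()
        if now - self.last >= self.interval:
            for qid in queue_ids:
                self.tokens[qid] = self.tokens.get(qid, 0) + 1
            self.last = now

    def take(self, qid) -> bool:
        if self.tokens.get(qid, 0) > 0:
            self.tokens[qid] -= 1
            return True
        return False

def can_forward(qid, queue, port_buffer_free, is_downlink_to_uplink, rate_control):
    """Forwarding condition of the access switch: send buffer free, queue non-empty,
    and, for downlink-to-uplink queues only, a token from the rate control module."""
    if not port_buffer_free or len(queue) == 0:
        return False
    if is_downlink_to_uplink:
        return rate_control.take(qid)
    return True
```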
The CPU module 804 is mainly responsible for protocol processing with the node server, configuration of the address table 806, and configuration of the code rate control module 808.
Ethernet protocol conversion gateway
As shown in fig. 9, the system mainly includes a network interface module (a downlink network interface module 901 and an uplink network interface module 902), a switching engine module 903, a CPU module 904, a packet detection module 905, a rate control module 908, an address table 906, a packet buffer 907, a MAC adding module 909, and a MAC deleting module 910.
A data packet arriving from the downlink network interface module 901 enters the packet detection module 905. The packet detection module 905 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), the MAC deleting module 910 strips the MAC DA, MAC SA and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise the packet is discarded.
the downlink network interface module 901 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node servers, node switches and metropolitan area servers. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts, in order: Destination Address (DA), Source Address (SA), reserved bytes, payload (PDU) and CRC, laid out as shown below:
DA (8 bytes) | SA (8 bytes) | Reserved (2 bytes) | Payload (PDU) | CRC (4 bytes)
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to different types of datagrams, and is 64 bytes if the datagram is various types of protocol packets, and is 32+1024 or 1056 bytes if the datagram is a unicast packet, of course, the length is not limited to the above 2 types;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph and there may be 2, or even more than 2, connections between two devices, i.e., there may be more than 2 connections between a node switch and a node server, a node switch and a node switch, and a node switch and a node server. However, the metro network address of the metro network device is unique, and in order to accurately describe the connection relationship between the metro network devices, parameters are introduced in the embodiment of the present invention: a label to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label: assuming there are two connections between device A and device B, a packet travelling from device A to device B has 2 labels, and a packet travelling from device B to device A also has 2 labels. Labels are divided into incoming labels and outgoing labels; assuming the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet leaving device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is carried out under centralized control, that is, address allocation and label allocation for the metropolitan area network are both directed by the metropolitan area server and passively executed by the node switches and node servers. This differs from label allocation in MPLS, where labels are the result of mutual negotiation between the switch and the server.
As shown below, the data packet of the metropolitan area network mainly includes the following parts:
DA (8 bytes) | SA (8 bytes) | Reserved (2 bytes) | Label (4 bytes) | Payload (PDU) | CRC (4 bytes)
Namely: Destination Address (DA), Source Address (SA), reserved bytes (Reserved), label, payload (PDU) and CRC. The format of the label may be defined as follows: the label is 32 bits long, with the upper 16 bits reserved and only the lower 16 bits used, and it is located between the reserved bytes and the payload of the data packet.
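Relative to the access network packet, the metropolitan area network packet only inserts the 4-byte label between the reserved bytes and the payload. The sketch below follows the same illustrative conventions as the access network example; whether the CRC covers the label field is an assumption made for the example, and the label value itself would be assigned by the metropolitan area server.

```python
import zlib

def build_metro_packet(da: bytes, sa: bytes, label_value: int, payload: bytes) -> bytes:
    """DA (8 bytes), SA (8 bytes), reserved (2 bytes), label (4 bytes: upper 16 bits
    reserved, lower 16 bits used), payload, CRC (4 bytes)."""
    assert len(da) == 8 and len(sa) == 8
    label = (label_value & 0xFFFF).to_bytes(4, "big")    # upper 16 bits remain zero
    body = da + sa + b"\x00\x00" + label + payload
    return body + zlib.crc32(body).to_bytes(4, "big")
```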
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a server, terminal, or the like, according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.

Claims (6)

1. A video data processing method is applied to a video network cache server, and is characterized in that the video network cache server caches N frames of video data, and the video data has a data volume, and the method comprises the following steps:
acquiring the data quantity sum of the N frames of video data;
when the sum of the data amount of the N frames of video data is larger than a first preset threshold value, extracting M frames of video data from the N frames of video data; wherein M is more than 0 and less than N;
sending the M frames of video data to a video networking terminal;
wherein the video data has a corresponding buffer time;
the video data having a corresponding decoding frame rate, the method further comprising, prior to the step of extracting M frames of video data from the N frames of video data:
determining a value M according to the decoding frame rate of the video data;
the step of extracting M frames of video data from the N frames of video data includes:
sorting the N frames of video data according to the cache time;
extracting the first M frames of video data in the sorted order as the M frames of video data;
the method further comprises the following steps:
acquiring initial frame video data from the N frames of video data;
acquiring initial caching time corresponding to the initial frame video data;
and if the difference value between the initial caching time and the current time is greater than a second preset threshold value and the data quantity sum of the N frames of video data is less than a third preset threshold value, sending the N frames of video data to the video network terminal.
2. The method of claim 1, wherein the step of sending the M frames of video data to a video networking terminal comprises:
adding video networking packet header information to each of the M frames of video data;
and sending the M frames of video data, to which the video networking packet header information has been added, to the video networking terminal.
3. The method of claim 1, wherein the video networking cache server comprises an intermediate cache layer, the method further comprising:
receiving at least one frame of video data;
and caching the at least one frame of video data in the intermediate cache layer.
4. A video networking cache server, wherein the video networking cache server caches N frames of video data, each frame of video data having a data volume, the video networking cache server comprising:
a data volume sum acquisition module, configured to acquire the sum of the data volumes of the N frames of video data;
a video data extraction module, configured to extract M frames of video data from the N frames of video data when the sum of the data volumes of the N frames of video data is greater than a first preset threshold, wherein 0 < M < N;
a first video data sending module, configured to send the M frames of video data to a video networking terminal;
wherein each frame of video data has a corresponding caching time;
the video data having a corresponding decoding frame rate, the video networking cache server further comprising:
an M value determination module, configured to determine the value of M according to the decoding frame rate of the video data;
wherein the video data extraction module comprises:
a data sorting submodule, configured to sort the N frames of video data according to their caching times;
an M-frame video data extraction submodule, configured to extract the first M frames of video data in the sorted order as the M frames of video data;
the video networking cache server further comprising:
an initial frame video data acquisition module, configured to acquire initial frame video data from the N frames of video data;
an initial caching time acquisition module, configured to acquire the initial caching time corresponding to the initial frame video data;
and a second video data sending module, configured to send the N frames of video data to the video networking terminal if the difference between the initial caching time and the current time is greater than a second preset threshold and the sum of the data volumes of the N frames of video data is less than a third preset threshold.
5. The video networking cache server of claim 4, wherein the first video data sending module comprises:
a video networking packet header information adding submodule, configured to add video networking packet header information to each of the M frames of video data;
and a video data sending submodule, configured to send the M frames of video data, to which the video networking packet header information has been added, to the video networking terminal.
6. The video networking cache server of claim 4, wherein the video networking cache server comprises an intermediate cache layer, the video networking cache server further comprising:
a video data receiving module, configured to receive at least one frame of video data;
and a video data caching module, configured to cache the at least one frame of video data in the intermediate cache layer.
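
For readability only (it is not part of the claims), the following is a minimal Python sketch of the buffering logic recited in claim 1. The concrete threshold values, the frame representation, and the rule that derives M from the decoding frame rate are assumptions chosen for the example; the claim only requires that M be determined according to the decoding frame rate, with 0 < M < N.

import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    payload: bytes        # encoded data of one video frame
    cache_time: float     # time at which the frame was cached, in seconds

@dataclass
class VideoCache:
    # All three thresholds are example values, not taken from the patent.
    first_threshold: int = 512 * 1024   # first preset threshold (total bytes)
    second_threshold: float = 2.0       # second preset threshold (seconds waited)
    third_threshold: int = 64 * 1024    # third preset threshold (total bytes)
    frames: List[Frame] = field(default_factory=list)

    def cache(self, payload: bytes) -> None:
        # Buffer an incoming frame together with its caching time.
        self.frames.append(Frame(payload, time.time()))

    def frames_to_send(self, decoding_frame_rate: int) -> List[Frame]:
        total = sum(len(f.payload) for f in self.frames)

        # Backlog branch: the summed data volume exceeds the first threshold,
        # so only M of the N cached frames are forwarded (0 < M < N). Here M
        # is simply capped by the decoding frame rate; the claim only says M
        # is determined from that rate, not how.
        if total > self.first_threshold and len(self.frames) > 1:
            m = max(1, min(decoding_frame_rate, len(self.frames) - 1))
            ordered = sorted(self.frames, key=lambda f: f.cache_time)
            selected, self.frames = ordered[:m], ordered[m:]
            return selected

        # Timeout branch: the earliest cached frame has waited longer than the
        # second threshold and the backlog is still small, so flush all N frames.
        if self.frames:
            initial_cache_time = min(f.cache_time for f in self.frames)
            if (time.time() - initial_cache_time > self.second_threshold
                    and total < self.third_threshold):
                selected, self.frames = self.frames, []
                return selected

        return []

In use, a caller would invoke cache() for each frame received over the video network and poll frames_to_send() before each send cycle; the selected frames would then be wrapped with video networking packet header information, as in claim 2, before transmission to the video networking terminal.
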
CN201711445197.XA 2017-12-27 2017-12-27 Video data processing method and video networking cache server Active CN108966038B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711445197.XA CN108966038B (en) 2017-12-27 2017-12-27 Video data processing method and video networking cache server

Publications (2)

Publication Number Publication Date
CN108966038A CN108966038A (en) 2018-12-07
CN108966038B true CN108966038B (en) 2021-01-22

Family

ID=64495683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711445197.XA Active CN108966038B (en) 2017-12-27 2017-12-27 Video data processing method and video networking cache server

Country Status (1)

Country Link
CN (1) CN108966038B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833478A (en) * 2019-04-15 2020-10-27 丰鸟航空科技有限公司 Data processing method, device, terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1866905A (en) * 2005-05-17 2006-11-22 华为技术有限公司 Method and apparatus for shaping transmission service stream in network
CN101521813A (en) * 2009-04-17 2009-09-02 杭州华三通信技术有限公司 Method and device for processing media stream
CN103533451A (en) * 2013-09-30 2014-01-22 广州华多网络科技有限公司 Method and system for regulating jitter buffer
CN104918133A (en) * 2014-03-12 2015-09-16 北京视联动力国际信息技术有限公司 Method and device for playing video streams in articulated naturality web
WO2017084311A1 (en) * 2015-11-18 2017-05-26 深圳Tcl新技术有限公司 Method and device for accelerating playing of single-fragment video
CN107277648A (en) * 2017-02-28 2017-10-20 大连理工大学 A kind of video transmission method of subway train LCD screen

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611322A (en) * 2014-11-14 2016-05-25 台湾艾特维股份有限公司 Video bandwidth adjustment device and adjustment method thereof

Also Published As

Publication number Publication date
CN108966038A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108737768B (en) Monitoring method and monitoring device based on monitoring system
CN108989078B (en) Method and device for detecting node equipment fault in video network
CN109150905B (en) Video network resource release method and video network sharing platform server
CN109547163B (en) Method and device for controlling data transmission rate
CN108881948B (en) Method and system for video inspection network polling monitoring video
CN110049341B (en) Video processing method and device
CN109788235B (en) Video networking-based conference recording information processing method and system
CN109743284B (en) Video processing method and system based on video network
CN109714568B (en) Video monitoring data synchronization method and device
CN108965783B (en) Video data processing method and video network recording and playing terminal
CN110769179B (en) Audio and video data stream processing method and system
CN110913162A (en) Audio and video stream data processing method and system
CN110769297A (en) Audio and video data processing method and system
CN110661992A (en) Data processing method and device
CN111212255B (en) Monitoring resource obtaining method and device and computer readable storage medium
CN110493149B (en) Message processing method and device
CN110446058B (en) Video acquisition method, system, device and computer readable storage medium
CN109889516B (en) Method and device for establishing session channel
CN110086773B (en) Audio and video data processing method and system
CN109769012B (en) Web server access method and device
CN111447396A (en) Audio and video transmission method and device, electronic equipment and storage medium
CN110049069B (en) Data acquisition method and device
CN110113555B (en) Video conference processing method and system based on video networking
CN108881148B (en) Data acquisition method and device
CN108966038B (en) Video data processing method and video networking cache server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 Gehua Building 1103, No. 1 Qinglong Hutong, Dongcheng District, Beijing

Applicant after: VISIONVERA INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 100000 Beijing city Dongcheng District Qinglong Hutong No. 1 Gehua building A1103-1113

Applicant before: BEIJING VISIONVERA INTERNATIONAL INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant