WO2019154221A1 - Method for sending stream data and data sending device - Google Patents

Method for sending stream data and data sending device

Info

Publication number
WO2019154221A1
WO2019154221A1, PCT/CN2019/073922, CN2019073922W
Authority
WO
WIPO (PCT)
Prior art keywords
data
queue
data block
frame
threshold
Prior art date
Application number
PCT/CN2019/073922
Other languages
English (en)
French (fr)
Inventor
Jadhav Rahul Arvind (贾达夫•拉胡尔•阿尔温德)
Cao Zhen (曹振)
Sarma K. Anmol Mani Tejeswa (萨尔玛•K•安莫尔•曼尼•特杰斯瓦)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2019154221A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682 Policies or rules for updating, deleting or replacing the stored data
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2401 Monitoring of the client buffer
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols

Definitions

  • the embodiments of the present invention relate to the field of communications technologies, and in particular, to a method for sending stream data and a data sending device.
  • Streaming is divided into two types: progressive streaming (Progressive Streaming) and real-time streaming (Realtime Streaming).
  • The data transmitted by streaming may be called a data stream, which is widely used in scenarios such as audio and video, network monitoring, online games, and financial services.
  • Progressive streaming is a sequential download: for example, while downloading an audio/video file, the user can view the portion that has already been downloaded.
  • Real-time streaming transmits data in real time, as in live broadcasting, where audio and video often need to be viewed as they are produced. Real-time streaming therefore cannot tolerate delay or excessive buffering, and the application must immediately process the received data and present it to the user.
  • Real-time streaming usage scenarios include: live webcasting, remote desktop sharing, video surveillance, and video telephony.
  • When streaming is used to transfer audio and video files, users can play the files while downloading, without first downloading the complete file; this can save minutes or even hours of download time, and the waiting time before playback decreases.
  • TCP Transmission Control Protocol
  • When stream data is sent over TCP, the application that sends the stream data (specifically, a real-time streaming media application) delivers data blocks of the stream data to the kernel-state cache, and each data block stays in the kernel-state cache of the sender until the sender confirms that the receiver has successfully received it; only then is the data block deleted from the sender's kernel-state cache.
  • Such processing causes a long queuing delay (the time a message waits in the kernel-state transmission queue before being sent) and lengthens the overall delay of stream data transmission.
  • Stream data clearly has high real-time transmission requirements; for applications with low tolerance for transmission delay, plain TCP transmission cannot satisfy the needs of stream data.
  • For example, suppose the bandwidth is 10 Mbps and the backlog of the video stream's data is 4 MB. Since 4 MB equals 32 Mb, the queuing delay of the video stream's data in the kernel-state cache may reach 3.2 s. This obviously has a huge impact on the real-time nature of data transmission, so the existing TCP transmission method cannot meet the transmission-delay requirements of stream data.
  • the embodiment of the present application provides a method for transmitting stream data and a data sending device, which can effectively reduce the accumulation of stream data in the kernel state, reduce the transmission delay of the stream data, and improve the real-time performance of the stream data transmitted by using the TCP protocol.
  • The embodiment of the present application provides a method for sending stream data, applied to a data sending end of a Transmission Control Protocol (TCP) connection, where an application runs in the operating system of the data sending end. The method includes: storing a data block delivered by the application into a first queue, where the data block is stream data and the first queue is a queue in the user state of the operating system of the data sending end, used to hold data blocks of the stream data to be sent; if the amount of data in a second queue meets a preset condition, adding at least one data block in the first queue to the second queue, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending end; and sending, by the data sending end, data to the data receiving end of the TCP connection through the second queue.
  • By storing the data block delivered by the application in the first queue, and adding at least one data block from the first queue to the second queue only when the amount of data in the second queue meets the preset condition, the accumulation of data blocks in the second queue can be effectively reduced. This reduces the transmission delay of the stream data, improves the real-time performance of data streams transmitted over TCP, and improves the efficiency of stream data transmission.
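The two-queue scheme described above can be sketched as follows. All names and the 64 KB budget are illustrative assumptions, not from the patent, and the second queue here merely stands in for the kernel-state TCP send buffer:

```python
from collections import deque

SECOND_QUEUE_LIMIT = 64 * 1024  # illustrative kernel send-buffer budget, in bytes

first_queue = deque()   # user-state queue holding blocks to be sent
second_queue = deque()  # stands in for the kernel-state TCP send buffer

def queued_bytes(q: deque) -> int:
    """Total amount of data currently sitting in a queue."""
    return sum(len(b) for b in q)

def enqueue_block(block: bytes) -> None:
    """The application delivers a block into the user-state first queue."""
    first_queue.append(block)

def drain_to_kernel() -> int:
    """Move blocks into the second queue while the preset condition holds;
    returns how many blocks were admitted."""
    moved = 0
    while first_queue and queued_bytes(second_queue) + len(first_queue[0]) <= SECOND_QUEUE_LIMIT:
        second_queue.append(first_queue.popleft())
        moved += 1
    return moved
```

The point of the design is that a backlog accumulates in the user state rather than the kernel state, where it can still be inspected, reprioritized, or dropped before being committed to the kernel.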
  • the preset condition is: the amount of data in the second queue does not exceed a second threshold, or the occupancy ratio of the amount of data in the second queue does not exceed a third threshold.
  • In a possible design, the method further includes: when the amount of data in the second queue exceeds the second threshold, or the occupancy ratio of the amount of data in the second queue exceeds the third threshold, discarding a lower-priority data block in the first queue.
  • If data blocks in the second queue are not sent out in time, too much data accumulates in the second queue, which increases the queuing delay of data blocks in the kernel state. With the method provided in this embodiment of the present application, higher-priority data blocks can be preferentially added to the second queue while lower-priority data blocks are discarded, satisfying the transmission-delay requirement of the stream data as far as possible and reducing its transmission delay.
  • the method further includes: reducing a rate of adding the data block from the first queue to the second queue.
  • When the rate of adding data blocks from the first queue to the second queue is variable, reducing that rate effectively prevents too many data blocks from accumulating in the second queue, which could otherwise cause the application to stall.
  • In a possible design, the method further includes: suspending the adding of at least one data block in the first queue to the second queue, and resuming it once the amount of data in the second queue no longer exceeds the second threshold.
  • In a possible design, the method further includes: suspending the adding of at least one data block in the first queue to the second queue, and resuming it once the occupancy ratio of the amount of data in the second queue no longer exceeds the third threshold.
  • When the rate of adding data blocks from the first queue to the second queue is fixed, temporarily suspending the adding of data blocks from the first queue to the second queue effectively prevents too many data blocks in the second queue from increasing the transmission delay.
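The suspend-and-resume behaviour might look like the following sketch. The callback names are hypothetical, and a real sender would block on socket writability rather than poll:

```python
import time
from collections import deque

def feed_kernel_queue(first_queue: deque, kernel_queued_bytes, send_block,
                      second_threshold: int, poll_interval: float = 0.01) -> None:
    """Drain the user-state queue, pausing whenever the kernel queue is over threshold.

    kernel_queued_bytes() reports the data currently in the kernel send buffer;
    send_block(block) hands one block to the kernel (e.g. a socket send).
    Both are hypothetical callbacks standing in for real kernel interfaces.
    """
    while first_queue:
        if kernel_queued_bytes() > second_threshold:
            time.sleep(poll_interval)  # suspend adding until the queue drains
            continue
        send_block(first_queue.popleft())
```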
  • the method further includes: discarding a lower priority data block in the first queue if the amount of data in the first queue exceeds a first threshold.
  • the data block sent by the application is a data block of a video stream
  • The data blocks of the video stream include bidirectionally predictive coded B frames, inter predictive coded P frames, and intra coded I frames.
  • If the data block of the video stream includes consecutive multiple frames that are not referenced by any other frame, the first frame of those consecutive frames has the lowest priority.
  • The priority of the B frames is lower than the priority of the P frames, and the priority of the P frames is lower than the priority of the I frames.
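The B &lt; P &lt; I ordering could be encoded as below (a sketch; the numeric values and names are illustrative, not from the patent):

```python
from enum import IntEnum

class FramePriority(IntEnum):
    """Higher value = higher priority; B < P < I as stated above."""
    B = 0  # bidirectionally predictive coded frame
    P = 1  # inter predictive coded frame
    I = 2  # intra coded frame, independently decodable

def pick_drop_candidate(blocks):
    """Choose the lowest-priority block as the first one to discard."""
    return min(blocks, key=lambda b: b["priority"])

blocks = [
    {"seq": 1, "priority": FramePriority.I},
    {"seq": 2, "priority": FramePriority.B},
    {"seq": 3, "priority": FramePriority.P},
]
print(pick_drop_candidate(blocks)["seq"])  # 2
```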
  • In a possible design, the application is a streaming media application, and the data block sent by the application is a data block of a video stream. The first threshold is determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter, where the first adjustment parameter is the duration of video-stream data that the TCP connection can buffer. The second threshold is determined by the video bit rate and a second adjustment parameter, where the second adjustment parameter is a delay parameter characterizing the degree of delay the streaming application can tolerate.
  • Setting the first threshold based on the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and the first adjustment parameter avoids fixing the first threshold at a constant value that could not make effective use of the available transmission capacity.
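The patent names the inputs to the two thresholds but not an exact formula; the functions below are one plausible reading, and both the names and the way the inputs are combined are assumptions:

```python
def second_threshold_bytes(video_bitrate_bps: float, tolerable_delay_s: float) -> float:
    """Data the kernel queue may hold without exceeding the tolerable delay."""
    return video_bitrate_bps * tolerable_delay_s / 8

def first_threshold_bytes(video_bitrate_bps: float, tcp_rate_bps: float,
                          first_queue_bytes: int, bufferable_duration_s: float) -> float:
    """One plausible combination: allow what is already queued plus the amount
    of video the connection can absorb over the bufferable duration, taking
    the slower of the source bit rate and the TCP connection rate."""
    rate_bps = min(video_bitrate_bps, tcp_rate_bps)
    return first_queue_bytes + rate_bps * bufferable_duration_s / 8

# e.g. an 8 Mbps stream with a 0.5 s delay budget tolerates ~500 kB in the kernel queue
print(second_threshold_bytes(8 * 10**6, 0.5))  # 500000.0
```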
  • In a possible design, storing the data block delivered by the application into the first queue includes: storing the data block into the first queue by calling a target application programming interface (API).
  • In a possible design, storing the data block delivered by the application into the first queue includes: storing, by a proxy of the application, the data block delivered by the application into the first queue, where the proxy is a process running in the user mode of the operating system.
  • The embodiment of the present application further provides a data sending device, which is a device applying the Transmission Control Protocol (TCP), with an application running in its operating system. The data sending device includes: a storage unit, configured to store a data block delivered by the application into a first queue, where the data block is stream data and the first queue is a queue in the user state of the operating system of the data sending device, used to hold data blocks of the stream data to be sent; an adding unit, configured to add at least one data block in the first queue to a second queue when the amount of data in the second queue meets a preset condition, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending device; and a sending unit, configured to send data through the second queue to the data receiving device of the TCP connection.
  • the preset condition is: the amount of data in the second queue does not exceed a second threshold, or the occupancy ratio of the amount of data in the second queue does not exceed a third threshold.
  • In a possible design, the data sending device further includes: a discarding unit, configured to discard a lower-priority data block in the first queue when the amount of data in the second queue exceeds the second threshold, or the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
  • In a possible design, the data sending device further includes: a rate reducing unit, configured to reduce the rate at which data blocks are added from the first queue to the second queue when the amount of data in the second queue exceeds the second threshold, or the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
  • In a possible design, the data sending device further includes: a suspending unit, configured to suspend adding at least one data block in the first queue to the second queue when the amount of data in the second queue exceeds the second threshold; the adding unit is further configured to continue adding at least one data block in the first queue to the second queue once the amount of data in the second queue no longer exceeds the second threshold.
  • In a possible design, the data sending device further includes: a suspending unit, configured to suspend adding at least one data block in the first queue to the second queue when the occupancy ratio of the amount of data in the second queue exceeds the third threshold; the adding unit is further configured to continue adding at least one data block in the first queue to the second queue once the occupancy ratio no longer exceeds the third threshold.
  • the discarding unit is further configured to discard the data block with a lower priority in the first queue if the amount of data in the first queue exceeds a first threshold.
  • the data block sent by the application is a data block of a video stream
  • The data blocks of the video stream include bidirectionally predictive coded B frames, inter predictive coded P frames, and intra coded I frames.
  • If the data block of the video stream includes consecutive multiple frames that are not referenced by any other frame, the first frame of those consecutive frames has the lowest priority.
  • The priority of the B frames is lower than the priority of the P frames, and the priority of the P frames is lower than the priority of the I frames.
  • In a possible design, the application is a streaming media application, and the data block sent by the application is a data block of a video stream. The first threshold is determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter, where the first adjustment parameter is the duration of video-stream data that the TCP connection can buffer. The second threshold is determined by the video bit rate and a second adjustment parameter, where the second adjustment parameter is a delay parameter characterizing the degree of delay the streaming application can tolerate.
  • In a possible design, the storage unit is configured to store the data block delivered by the application into the first queue by calling a target application programming interface (API).
  • In a possible design, the storage unit is configured to store, by a proxy of the application, the data block delivered by the application into the first queue, where the proxy is a process running in the user mode of the operating system.
  • The embodiment of the present application further provides a data sending device, which is a device applying the Transmission Control Protocol (TCP), with an application running in its operating system. The data sending device includes a processing circuit, a storage medium, and a transceiver, which are interconnected by lines; the storage medium stores program instructions. When executed by the processing circuit, the program instructions cause the processing circuit to: store a data block delivered by the application into a first queue, where the data block is stream data and the first queue is a queue in the user state of the operating system of the data sending end, used to hold data blocks of the stream data to be sent; and, if the amount of data in a second queue satisfies a preset condition, add at least one data block in the first queue to the second queue, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending end, and send data to the data receiving end of the TCP connection through the second queue.
  • the preset condition is: the amount of data in the second queue does not exceed a second threshold, or the occupancy ratio of the amount of data in the second queue does not exceed a third threshold.
  • the processing circuit is further configured to: when the amount of data in the second queue exceeds the second threshold, or the amount of data in the second queue When the occupancy ratio exceeds the third threshold, the data block with lower priority in the first queue is discarded.
  • the processing circuit is further configured to: when the amount of data in the second queue exceeds the second threshold, or the amount of data in the second queue If the occupancy ratio exceeds the third threshold, the rate at which data blocks are added from the first queue to the second queue is reduced.
  • In a possible design, the processing circuit is further configured to suspend adding at least one data block in the first queue to the second queue if the amount of data in the second queue exceeds the second threshold, and to continue adding at least one data block in the first queue to the second queue once the amount of data in the second queue no longer exceeds the second threshold.
  • In a possible design, the processing circuit is further configured to suspend adding at least one data block in the first queue to the second queue if the occupancy ratio of the amount of data in the second queue exceeds the third threshold, and to continue adding at least one data block in the first queue to the second queue once the occupancy ratio no longer exceeds the third threshold.
  • the processing circuit is further configured to discard the data block with a lower priority in the first queue if the amount of data in the first queue exceeds a first threshold.
  • the data block sent by the application is a data block of a video stream
  • the data block of the video stream includes a bidirectional predictive coding B frame, an inter prediction coding P frame, and an intra coded I frame.
  • If the data block of the video stream includes consecutive multiple frames that are not referenced by any other frame, the first frame of those consecutive frames has the lowest priority.
  • The priority of the B frames is lower than the priority of the P frames, and the priority of the P frames is lower than the priority of the I frames.
  • In a possible design, the application is a streaming media application, and the data block sent by the application is a data block of a video stream. The first threshold is determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter, where the first adjustment parameter is the duration of video-stream data that the TCP connection can buffer. The second threshold is determined by the video bit rate and a second adjustment parameter, where the second adjustment parameter is a delay parameter characterizing the degree of delay the streaming application can tolerate.
  • In a possible design, the processing circuit is specifically configured to store the data block delivered by the application into the first queue by calling a target application programming interface (API).
  • In a possible design, the processing circuit is configured to store, by a proxy of the application, the data block delivered by the application into the first queue, where the proxy is a process running in the user mode of the operating system.
  • The embodiment of the present application provides a computer readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by the processing circuitry of a data sending device, cause the processing circuitry to perform the method described in the first aspect or any one of its possible implementations.
  • The embodiment of the present application further provides a computer program product which, when run on the data sending device, causes the data sending device to perform the method described in the first aspect or any one of its possible implementations.
  • FIG. 1A is a schematic diagram of a scenario of remote desktop sharing provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram of a video surveillance scenario according to an embodiment of the present disclosure.
  • FIG. 2A is a schematic diagram of video codec provided by an embodiment of the present application.
  • FIG. 2B is a schematic structural diagram of a system provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for sending stream data according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a data optimization architecture provided by an embodiment of the present application.
  • FIG. 5A is a schematic diagram of a data optimization architecture provided by an embodiment of the present application.
  • FIG. 5B is a schematic diagram of another data optimization architecture provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart diagram of another method for sending stream data according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart diagram of still another method for sending stream data according to an embodiment of the present application.
  • FIG. 8A is a schematic diagram of a process for processing stream data by a real-time video capture device according to an embodiment of the present application.
  • FIG. 8B is a schematic diagram of a specific scenario of a method for sending stream data according to an embodiment of the present application.
  • FIG. 8C is a schematic diagram of a process of processing stream data by a remote control device according to an embodiment of the present application.
  • FIG. 9A is a schematic structural diagram of a data sending device according to an embodiment of the present application.
  • FIG. 9B is a schematic structural diagram of another data sending device according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of still another data sending device according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • Kernel state: a process running in the kernel address space, or such a process, is said to be in the kernel state. In the embodiment of the present application, the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system is the second queue. It should be understood that, in one implementation, one TCP connection corresponds to one send buffer queue.
  • User state: a process running in the user address space, or such a process, is said to be in the user state. In the embodiment of the present application, the queue in the user state of the operating system is the first queue.
  • Data block: a group or several groups of records arranged consecutively in sequence. In the embodiment of the present application, a data block is a data block of stream data, where the data transmitted by streaming is called a data stream; the term "data block" in this application should therefore not be construed as limiting.
  • I frame: also known as an intra coded frame or key frame, an independent frame carrying all the information of its picture. It can be decoded independently without reference to other frames and can simply be understood as a static picture. The first frame in a video sequence is always an I frame.
  • P (Predicted) frame: also known as an inter predictive coded frame, which must reference the previous I frame (or P frame) for encoding. It represents the difference between the current picture and the previous frame (which may be an I frame or a P frame). When decoding, the difference defined by this frame is superimposed on the previously buffered picture to generate the final picture. P frames usually occupy fewer data bits, but the disadvantage is that they are very sensitive to transmission errors because of their complex dependence on previous P and I reference frames.
  • B frame: also called a bidirectionally predictive coded frame; a B frame records the difference between this frame and both the preceding and following frames. To decode a B frame, both the previously buffered picture and the picture following this frame must be obtained, and the final picture is generated by superimposing the preceding and following pictures with this frame's data. B frames achieve a high compression rate, but demand high decoding performance.
  • the video bit rate is the network bit rate required for normal playback of a video stream.
  • the video bit rate may be a constant bit rate (CBR) or a variable bit rate or the like.
  • CBR constant bit rate
  • VBR variable bit rate
  • The video bit rate is determined from the video stream data at compression time; this is a way of controlling file size while maintaining quality.
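As a point of reference, the average bit rate of an encoded file follows directly from its size and duration (a trivial check, not part of the patent):

```python
def average_bitrate_bps(file_size_bytes: int, duration_s: float) -> float:
    """Average bit rate of an encoded video: total bits over playback time."""
    return file_size_bytes * 8 / duration_s

# a 90 MB file that plays for 60 s averages 12 Mbps
print(average_bitrate_bps(90 * 10**6, 60) / 10**6)  # 12.0
```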
  • the streaming transmission is divided into sequential streaming and real-time streaming.
  • the user can watch while downloading.
  • the video can be downloaded through the download control.
  • the user can also watch the video while the terminal device downloads the video.
  • Real-time streaming transmits data in real time, such as a live broadcast.
  • the live streaming scenario includes at least: live web video, remote desktop sharing, video surveillance, and video telephony.
  • FIG. 1A is a schematic diagram of a remote desktop sharing scenario, in which the terminal device 120 connects to a remote terminal and the desktop of the remote terminal is displayed on the terminal device 120.
  • The desktop of the remote terminal needs to be displayed on the terminal device 120 in real time, so the communication needs to be real-time and is based on TCP transmission.
  • FIG. 1B is a schematic diagram of a video surveillance scenario provided by an embodiment of the present application.
  • FIG. 1B shows typical real-time video streaming, such as a video surveillance scenario, in which a user connects, through an application or browser on the smartphone 130, to the Hyper Text Transfer Protocol (HTTP) server on a camera and obtains the device list of monitoring devices. The user can then receive the real-time video captured by a monitoring device; the data blocks of this real-time video stream are transmitted in real time, based on TCP.
  • HTTP Hyper Text Transfer Protocol
  • FIG. 1A and FIG. 1B are merely examples, and should not be construed as limiting.
  • the remote terminal in FIG. 1A and the monitoring device in FIG. 1B can be understood as the data transmitting device in the embodiment of the present application.
  • Compression: most popular video codecs allow spatial (intra-frame) compression as well as temporal (inter-frame) compression.
  • the compression algorithm may change, but all video codecs follow a similar compression scheme.
  • The encoder encodes multiple images to generate a Group Of Pictures (GOP) segment.
  • GOP Group Of Pictures
  • The decoder reads a GOP segment, decodes it, and then renders it.
  • A GOP is a set of consecutive pictures consisting of one I frame and several B/P frames. It is the basic access unit of the video encoder and decoder, and this repeating pattern continues until the end of the video.
  • The I frame is an intra-coded frame (also referred to as a key frame)
  • the P frame is a forward predicted frame (forward reference frame)
  • the B frame is a bidirectionally interpolated frame (bidirectional reference frame). Simply put, the I frame is a complete picture, while the P and B frames record changes relative to the I frame.
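As an illustration of the GOP structure just described, the short sketch below models which frames an I, P, or B frame references during decoding. The function name, the simplified dependency rule (a P frame references the nearest preceding I/P anchor, a B frame the surrounding anchors), and the sample GOP pattern are hypothetical, not taken from the patent.

```python
# Illustrative sketch (not from the patent): modeling a GOP's decode
# dependencies with a simplified reference rule.

def decode_dependencies(gop):
    """For each frame in a GOP, list the indices of the frames it references.

    I frames reference nothing; P frames reference the nearest preceding
    I/P frame; B frames reference the surrounding I/P frames.
    """
    anchors = [i for i, t in enumerate(gop) if t in ("I", "P")]
    deps = []
    for i, t in enumerate(gop):
        if t == "I":
            deps.append([])
        elif t == "P":
            deps.append([max(a for a in anchors if a < i)])
        else:  # "B": previous and next anchor frame
            prev = max(a for a in anchors if a < i)
            nxt = min(a for a in anchors if a > i)
            deps.append([prev, nxt])
    return deps

gop = ["I", "B", "B", "P", "B", "B", "P"]
print(decode_dependencies(gop))
# -> [[], [0, 3], [0, 3], [0], [3, 6], [3, 6], [3]]
```

The output makes the text's point concrete: the I frame (index 0) is self-contained, while every P and B frame depends on earlier (and, for B, later) frames.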
  • FIG. 2A is a schematic diagram of video codec provided by an embodiment of the present application. It can be understood that the video stream codec shown in FIG. 2A is only an example, and should not be construed as limiting the embodiments of the present application.
  • TCP Transmission Control Protocol
  • Examples of streaming transmission over the TCP transport protocol:
  • Remote desktop sharing uses the Remote Frame Buffer (RFB) protocol to transmit over the TCP protocol.
  • RFB Remote Frame Buffer
  • remote desktop sharing applications RealVNC, TightVNC, TigerVNC, TeamViewer.
  • Video surveillance: watching real-time video surveillance in an application based on the Hyper Text Transfer Protocol (HTTP) browser.
  • HTTP Hyper Text Transfer Protocol
  • Video calls: although most video calls use the User Datagram Protocol (UDP) as the transport layer protocol, in cases where Network Address Translation (NAT) blocks UDP traffic, real-time streaming applications will fall back to using the TCP protocol.
  • UDP User Datagram Protocol
  • NAT Network Address Translation
  • HLS HTTP Live Streaming
  • DASH Dynamic Adaptive Streaming over HTTP
  • RTMP Real Time Messaging Protocol
  • Overuse of the kernel-state cache causes high latency; the default TCP kernel-state send buffer size in the Linux kernel is about 4 MB.
  • Caching can effectively reduce the interaction between applications and the Linux kernel and increase throughput.
  • For real-time services such as real-time streaming, however, an overly large cache causes long queuing delays and has a negative effect.
  • Impact of the wireless link: if the network link is poor, the data to be transmitted starts to accumulate and queue in the kernel-state cache, waiting to be sent. If the data to be transmitted is real-time stream data, this buffering introduces a delay that affects the user experience, for example by causing stuttering that disrupts viewing.
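The kernel send-buffer size discussed above can be inspected, and reduced for latency-sensitive streams, through the standard socket API. The sketch below is illustrative and not part of the patent's method; the 64 KB value is an arbitrary example, and the kernel may round or double whatever is requested.

```python
# A minimal sketch (not from the patent): inspecting and shrinking the
# kernel-state TCP send buffer with the standard socket API.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

default_sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("default kernel send buffer:", default_sndbuf, "bytes")

# A smaller send buffer bounds how much data can queue in the kernel;
# on Linux the kernel typically doubles the requested value.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
adjusted = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("adjusted kernel send buffer:", adjusted, "bytes")
s.close()
```

Shrinking the kernel buffer is a blunt alternative to the two-queue design described later: it limits kernel-side queuing but gives the application no chance to prioritize or discard blocks.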
  • FIG. 2B is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • A socket identifies a communication endpoint; establishing a network communication connection requires at least a pair of port numbers.
  • the socket is essentially an application programming interface (API).
  • API application programming interface
  • To use a socket, the application calls a library (lib).
  • The API is a function library that defines the operation logic of its functions, and the application using it runs in user mode. Therefore, the socket connects the user state and the kernel state.
  • Socket Application: an application that performs network communication using the socket programming interface
  • Socket API: the socket programming interface
  • Socket interface implementation: the internal implementation of the socket programming interface
  • Protocol implementation of transport layer and network layer: the protocol implementations for the transport layer and the network layer
  • Data link layer protocol implementation: the data link layer protocol implementation, such as the Ethernet protocol or the WiFi protocol;
  • An application in the data sending device calls the socket API to create a socket, thereby establishing a TCP connection with the data receiving device through the protocol stack; after the TCP connection is established, the data sending device sends or receives stream data through that TCP connection.
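The flow just described — the application creates a socket via the socket API, the kernel protocol stack carries the TCP connection, and stream data is sent through it — can be sketched as follows. The loopback listener standing in for the data receiving device, and the payload, are illustrative only.

```python
# A minimal sketch (not from the patent) of the socket flow in FIG. 2B:
# a loopback listener stands in for the data receiving device.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # stand-in data receiving device
server.listen(1)
addr = server.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(addr)                  # TCP handshake via the kernel stack
conn, _ = server.accept()

sender.sendall(b"stream data block")  # queued in the kernel send buffer
received = conn.recv(64)
print(received)                       # -> b'stream data block'

sender.close(); conn.close(); server.close()
```

Everything between `sendall` and the receiver's `recv` happens in the kernel-state send queue — which is exactly the "second queue" the method below manages.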
  • the embodiment of the present application provides a data processing method, and the data processing method provided by the embodiment of the present application is specifically described below.
  • FIG. 3 is a schematic flowchart diagram of a method for sending stream data according to an embodiment of the present disclosure.
  • the method for sending stream data may be applied to a data sending device, where the data sending device may include a mobile phone, a tablet computer, and a notebook computer.
  • The data transmitting device may further include a webcam, a media live-streaming client, and the like; the data transmitting device may also include a server that collects video information, a network intermediate device that forwards real-time stream data, and so on. The embodiment of the present application does not limit the data transmitting device.
  • The method for sending stream data is applied to the data sending end of the TCP connection. The application running in the operating system of the data sending end may include a streaming media application and the like; this embodiment is not limited thereto.
  • the data sending end is the data sending device in the embodiment of the present application.
  • the method for transmitting stream data includes at least the following steps:
  • The data block sent by the application is stored in the first queue, where the data block is stream data, and the first queue is a queue in the user state of the operating system of the data sending end; the first queue is used to hold the data blocks of the stream data to be sent;
  • the stream data may include video stream data.
  • the stream data may be video stream data collected by the camera.
  • The stream data may further include image stream data, such as the image stream data collected by the data transmitting device when the live streaming scenario is remote desktop sharing. That is to say, in the embodiment of the present application, the data block may be a data block of a video stream, or a data block of image stream data, or the like.
  • The second queue is the transmission buffer queue corresponding to the TCP protocol.
  • The preset condition is a condition related to the amount of data in the second queue. Specifically, the preset condition is that the amount of data in the second queue does not exceed the second threshold, or that the occupancy ratio of the amount of data in the second queue does not exceed the third threshold.
  • The second threshold is a dynamic threshold for measuring the amount of data in the second queue.
  • the second threshold may be a threshold determined by a video bit rate and a second adjustment parameter, and the second adjustment parameter is a delay parameter.
  • the second adjustment parameter can be used to characterize the degree of delay that the streaming application can tolerate. That is to say, the second adjustment parameter is a delay parameter that the streaming application can tolerate.
  • Different streaming media applications may tolerate different delays, and therefore different streaming media applications may correspond to different second adjustment parameters; accordingly, different streaming media applications may also correspond to the same second adjustment parameter. This embodiment is not limited thereto.
  • The embodiment of the present application does not limit how the second adjustment parameter is set or what its value is.
  • The second adjustment parameter may be determined according to the time at which the user can clearly perceive stuttering, or according to an empirical value, or the like.
  • The third threshold may be used to measure the occupancy ratio of the amount of data in the second queue. For example, if the third threshold is 80%, the preset condition is that the amount of data in the second queue does not exceed 80% of the total capacity of the second queue.
  • the third threshold may be other values, which is not limited in this embodiment.
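The text defines the second threshold through the video bit rate and a tolerable-delay parameter but gives no explicit formula. One natural reading, shown below as an assumption (the function name is hypothetical), is simply their product: the threshold is the number of bytes that can sit in the kernel send queue without exceeding the delay the application tolerates.

```python
# Sketch of one plausible reading (an assumption, not a formula stated in
# the text): second_threshold = video_bit_rate * tolerable_delay.

def second_threshold(vbr_bits_per_s, delay_s):
    """Bytes that may sit in the kernel send queue without exceeding the
    delay the streaming application can tolerate."""
    return vbr_bits_per_s * delay_s / 8  # bits -> bytes

# With the example values used later in this document
# (VBR = 4 Mbps, alpha = 100 ms):
print(second_threshold(4_000_000, 0.100))  # -> 50000.0 bytes (~50 KB)
```

Reassuringly, this reading reproduces the 50 KB kernel-cache threshold used in the FIG. 8A example later in this document.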
  • the data sending end sends data to the data receiving end of the TCP connection by using the second queue.
  • When the data block delivered by the application is first stored in the first queue, and a data block in the first queue is added to the second queue only when the amount of data in the second queue satisfies the preset condition, the accumulation of data blocks in the second queue is effectively reduced. This avoids the situation where the second queue is too full to accept data, which would cause stuttering; it also effectively reduces the transmission delay of the data, improves the real-time performance of transmitting data over the TCP protocol, and improves the efficiency of data transmission.
  • The embodiment of the present application further provides a method for sending stream data which guarantees that data blocks with higher priority are sent preferentially: the data blocks with higher priority in the first queue are retained, and some data blocks with lower priority are discarded. Therefore, based on the data processing method described in FIG. 3, the foregoing method further includes:
  • The data block with lower priority may be the data block with the lowest priority in the first queue, or any data block other than the data block with the highest priority in the first queue, etc. This embodiment does not limit how many data blocks are discarded.
  • When the data block sent by the foregoing application is a data block of a video stream, and the data block of the video stream includes B frames, P frames, and I frames, the embodiment of the present application further provides a method of determining the priority of a data block, as follows:
  • The priority may be determined according to the number of times a frame is referenced; alternatively, when consecutive multiple frames are not referenced by other frames, the first frame of those consecutive frames may be given the lowest priority. The embodiment of the present application is not limited to these two approaches.
  • The embodiment of the present application may use the number of reference relationships between frames to measure priority; that is, the more times a frame is referenced by other frames, the higher its priority. Under this priority rule, frames that are not referenced by other frames have the lowest priority. However, if the data block of the video stream contains consecutive frames none of which is referenced by other frames, giving all of those consecutive frames the lowest priority would likely lead to a degradation of video playback quality. Therefore, if the data block of the video stream contains consecutive frames that are not referenced by other frames, only the first frame of the consecutive frames has the lowest priority.
  • Frames with the same priority may occur. Therefore, when a B frame, a P frame, and an I frame are referenced by other frames the same number of times, or when the B frame, the P frame, and the I frame are each the first frame of a group of consecutive unreferenced frames, the priority of the B frame is lower than the priority of the P frame, and the priority of the P frame is lower than the priority of the I frame.
  • The data sending device may initially allocate a base priority to each frame of stream data, and then adjust priorities according to the foregoing priority rule: each time a frame is referenced by another frame, the priority of that frame is increased (e.g., by 0.1), so that a frame's priority is proportional to the number of times it is referenced by other frames.
  • The data sending device may assign a priority to each frame of data before storing the data block sent by the application into the first queue, or it may assign the priority to each frame of data when the data block sent by the application is stored into the first queue, and so on. The embodiment of the present application does not uniquely define when the data transmitting device assigns priorities.
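The priority rule described above — a base priority per frame, raised by roughly 0.1 for each time the frame is referenced, with ties breaking as B < P < I — can be sketched as follows. The base values and the data layout are illustrative assumptions, not taken from the patent.

```python
# Sketch of the priority rule described above; the base priorities and
# the 0.1 increment are illustrative values consistent with the text.

BASE = {"I": 3.0, "P": 2.0, "B": 1.0}  # ties break as B < P < I

def assign_priorities(frames, referenced_by):
    """frames: list of (name, frame_type); referenced_by: name -> count.

    A frame's priority grows with how often other frames reference it.
    """
    return {
        name: BASE[ftype] + 0.1 * referenced_by.get(name, 0)
        for name, ftype in frames
    }

frames = [("F1", "I"), ("F2", "B"), ("F3", "P")]
refs = {"F1": 2, "F3": 1}              # F2 is referenced by no other frame
prio = assign_priorities(frames, refs)
lowest = min(prio, key=prio.get)
print(lowest)                           # -> F2 (the unreferenced B frame)
```

Under this rule the unreferenced B frame is always the first candidate for discarding, which matches the dropping behavior described in the following paragraphs.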
  • When the amount of data in the second queue is large, the method provided in this embodiment of the present application allows the data blocks with higher priority to be sent out and the data blocks with lower priority to be discarded, thus ensuring the best possible communication quality and reducing the queuing time of data in the kernel-state send queue.
  • When the network quality is poor or changes drastically, the amount of data in the second queue may already be large; if the data blocks in the first queue continue to be added to the second queue, the queuing time of data blocks in the kernel-state send queue is seriously affected. Therefore, based on the foregoing method for sending stream data, the embodiment of the present application further provides a method for sending stream data, as follows:
  • the method further includes:
  • the rate at which the data block is added to the second queue from the first queue is reduced.
  • the method further includes:
  • the foregoing method further includes:
  • The data transmitting device can lower this rate, thereby further reducing the number of data blocks added to the second queue.
  • A threshold may be set to measure the amount of data in the second queue, such as the second threshold or the third threshold, or the amount of data in the second queue may be measured by the second threshold and the third threshold simultaneously.
  • As long as the amount of data in the second queue satisfies one of the conditions, the data transmitting device can lower the rate at which data blocks are added from the first queue to the second queue.
  • Otherwise, the data sending device adds the data blocks in the first queue to the second queue at the normal rate.
  • Alternatively, the data sending device may stop adding data blocks from the first queue to the second queue until the amount of data in the second queue no longer exceeds the second threshold, after which the data transmitting device can continue adding the data blocks in the first queue to the second queue.
  • Similarly, the data sending device may stop adding data blocks from the first queue to the second queue until the occupancy ratio of the amount of data in the second queue no longer exceeds the third threshold, after which the data transmitting device may continue adding the data blocks in the first queue to the second queue.
  • Only one threshold, such as the second threshold or the third threshold, may be set, or the second threshold and the third threshold may be set simultaneously to measure the amount of data in the second queue; this embodiment is not limited. For details, refer to the above description, which is not repeated here.
  • The embodiment of the present application can effectively prevent the second queue from becoming too full to accept data blocks, which would cause the application to stutter, and effectively avoids the situation where too many data blocks in the second queue increase the transmission delay of the data blocks.
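The queue-management decisions described above — discard low-priority blocks when the user-state first queue overflows, and stop moving blocks into the kernel-state second queue once its data amount would exceed the second threshold — can be condensed into a sketch. All names, the (priority, size) block representation, and the threshold values are hypothetical.

```python
# A condensed sketch (hypothetical names/values) of the scheduling logic:
# first_queue is the user-state cache, second_queue the kernel-state
# send queue; blocks are (priority, size) tuples.
from collections import deque

def schedule(first_queue, second_queue, second_threshold, first_threshold):
    dropped = []
    # Discard the lowest-priority blocks while the user-state queue overflows.
    while sum(s for _, s in first_queue) > first_threshold:
        victim = min(first_queue, key=lambda b: b[0])
        first_queue.remove(victim)
        dropped.append(victim)
    # Pause moving blocks as soon as the kernel-state amount would exceed
    # the second threshold.
    while first_queue and sum(s for _, s in second_queue) + first_queue[0][1] <= second_threshold:
        second_queue.append(first_queue.popleft())
    return dropped

fq = deque([(3.0, 40), (1.0, 40), (2.0, 40)])  # (priority, size in KB)
sq = deque()
dropped = schedule(fq, sq, second_threshold=80, first_threshold=100)
print([p for p, _ in dropped])  # -> [1.0]       lowest priority discarded
print([p for p, _ in sq])       # -> [3.0, 2.0]  moved until threshold hit
```

The two `while` loops mirror the two mechanisms the text names: priority-based discarding in the first queue, and threshold-gated admission into the second queue.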
  • The embodiment of the present application further provides a method for sending stream data, as follows:
  • the data block with the lower priority in the first queue is discarded.
  • The first threshold is a dynamic threshold for measuring the amount of data in the first queue. The first threshold may be a threshold determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter, where the first adjustment parameter is the duration of video stream data blocks that the TCP connection can buffer.
  • Different streaming media applications may correspond to different first adjustment parameters, or different streaming media applications may correspond to the same first adjustment parameter; this embodiment is not limited thereto.
  • The present application does not limit the calculation method of the first threshold or of the second threshold described below, nor the value calculation rules; the specific calculation manners described in the present application are merely illustrative.
  • For example, the current data volume threshold of the first queue may be determined comprehensively from the video bit rate (VBR), the TCP link rate (link-rate), an adjustable parameter beta, and the capacity of the first queue (user-space-bufsize).
  • The first threshold in the embodiment of the present application may be calculated as min[max(VBR, link-rate)*beta, user-space-bufsize], where the first threshold reflects the maximum amount of buffering under the current link-rate.
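The formula above can be written directly as a function. Note that beta = 5 s is an assumed value, chosen only so that the 4 Mbps / 20 Mb example given later in this document comes out consistently; the text does not state beta.

```python
# The first-threshold formula from the text,
# min[max(VBR, link-rate)*beta, user-space-bufsize], as a function.
# beta = 5 s below is an ASSUMED value, not stated in the text.

def first_threshold(vbr_mbps, link_rate_mbps, beta_s, user_space_bufsize_mb):
    """All rates in Mbps, beta in seconds; result in Mb (megabits)."""
    return min(max(vbr_mbps, link_rate_mbps) * beta_s, user_space_bufsize_mb)

# Assumed instance: VBR = 4 Mbps, link-rate = 2 Mbps, beta = 5 s, and a
# user-space buffer large enough not to be the limiting term.
print(first_threshold(4, 2, 5, 40))  # -> 20 (Mb), i.e. 2.5 MB
```

The `min` term means a small user-space buffer caps the threshold regardless of link rate, while the `max` term keeps the threshold from collapsing when the link rate drops below the video bit rate.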
  • After the data sending device stores the data block delivered by the application into the first queue, the data sending device may determine, according to the amount of data in the first queue, whether to discard the data blocks with lower priority in the first queue; alternatively, the data sending device can determine whether to discard the lower-priority data blocks in the first queue based on both the amount of data in the first queue and the amount of data in the second queue.
  • When the network quality is poor, the amount of data in the second queue may be very large, and the amount of data in the first queue may also be very large. By using the method of the present application, the data blocks with higher priority are sent out and the data blocks with lower priority are discarded, so that the most important data blocks are effectively transmitted, the communication quality is ensured, and the impact on communication quality is minimized.
  • FIG. 4 is a schematic diagram of a data optimization architecture according to an embodiment of the present disclosure.
  • the data optimization architecture may be applied to a data sending device, and specifically, may be applied to a method for transmitting stream data.
  • the data optimization architecture 400 may include at least:
  • Joint scheduling module 401, user state cache module 402, and kernel state cache control module 403;
  • The joint scheduling module 401 is configured to obtain the data blocks sent by the streaming media application; a data block passes through the joint scheduling module 401 and is stored in the first queue, that is, the joint scheduling module 401 may control the data block to be stored in the user state cache module 402. The joint scheduling module 401 may also, when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold, control the discarding of the data blocks with lower priority in the first queue; or, when the amount of data in the first queue exceeds the first threshold, the joint scheduling module 401 may control the discarding of the data blocks with lower priority in the first queue. In addition, when a data block cannot be stored in the first queue, the joint scheduling module 401 can control the data block to be stored directly in the second queue, or the like.
  • The user mode cache module 402 can also be understood as a priority-based user-mode cache management module: a data block enters the user state cache module 402 via the joint scheduling module 401, is scheduled by the user state cache module 402, and is then transferred according to the priority policy.
  • the kernel state cache control module 403 is configured to dynamically adjust the data volume threshold of the second queue.
  • The data volume threshold of the second queue may be set according to the method described in FIG. If the real-time requirement of the streaming media application is high, the data volume threshold of the second queue may be set to the second threshold; if the real-time requirement of the streaming media application is low, the data volume threshold of the second queue, such as the second threshold, can be set larger.
  • the kernel mode cache control module 403 can also control adding the data block from the first queue to the second queue.
  • the data optimization architecture 400 in the embodiment of the present application is deployed in a user mode, independent of the operating system kernel.
  • After the streaming application obtains a data block, and before the data block is sent over the TCP connection, the data transmitting device can send the data block of the stream data through the TCP connection. The data block of the stream data may pass in sequence through the joint scheduling module 401 and the user state cache module 402 in the data optimization architecture 400, and is then sent through the protocol stack implementation (e.g., the protocol stack in FIG. 2B) to the data receiving device of the TCP connection.
  • The data optimization architecture 400 acquires the data sent by the streaming media application, establishes a user-state data cache, and analyzes and processes the data, so that the time the data blocks spend queued in the kernel-state send queue is reduced and the user experience is improved.
  • The embodiment of the present application uses two-level cache management, with a user state cache and a kernel state cache, which allows data blocks to be buffered in the user state cache and optimized there. It can be understood that any other streaming media data can replace the video stream data to obtain a similar effect.
  • For example, stream data of control commands that is sent alongside the video stream data can also be buffered in the user state, so that the data can be sorted and discarded according to priority when the network deteriorates.
  • the embodiment of the present application provides two architecture deployment manners, as shown in FIG. 5A and FIG. 5B.
  • The first architecture deployment method (see FIG. 5A) works through a library (dynamic library or static library) and an API.
  • the streaming media application directly sends the data block to the data optimization architecture 400 by calling the API of the data optimization architecture 400 provided by the embodiment of the present application.
  • After the data optimization architecture 400 processes the data block, the data block is sent to the operating system kernel.
  • Deployment in this case requires the application to replace its original system calls. That is, when using the deployment approach provided in FIG. 5A, the streaming application can be developed by the developer of the data optimization architecture 400.
  • the storing the data block delivered by the application into the first queue includes:
  • the data block delivered by the above application is stored in the first queue by calling the target application programming interface API.
  • The second architecture deployment method (see FIG. 5B) works through a proxy.
  • The data optimization architecture 400 in the embodiment of the present application runs as a separate process in the user state. By configuring a proxy for the application (for example, using iptables to redirect all the data blocks sent by the streaming media application to the specified process), all data blocks are intercepted into the data optimization architecture 400 in the embodiment of the present application, and after processing is completed, the data blocks are sent to the operating system kernel.
  • Deployment in this case does not require modification of the original application. That is to say, when the deployment mode provided in FIG. 5B is used, the streaming media application need not be developed by the developer of the data optimization architecture 400.
  • the storing the data block sent by the application into the first queue includes:
  • the data block delivered by the application is stored in the first queue by the proxy of the application, and the proxy is a process running in the user state of the operating system.
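The proxy deployment of FIG. 5B can be approximated by a minimal user-state forwarding process, sketched below. The actual traffic redirection (e.g., via iptables) is assumed to be configured separately, the queueing/priority logic is reduced to a comment, and a loopback server stands in for the receiving end — this is not the patent's actual implementation.

```python
# A minimal user-state forwarding proxy (a sketch, not the patent's
# implementation): intercepted application traffic enters the proxy
# process, where it could be queued/prioritized before being handed to
# the kernel. Traffic redirection (e.g., iptables) is assumed elsewhere.
import socket
import threading

def run_proxy(listen_sock, upstream_addr):
    conn, _ = listen_sock.accept()
    upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    upstream.connect(upstream_addr)
    while True:
        data = conn.recv(4096)       # block from the application
        if not data:
            break
        # ... user-state queueing / priority decisions would go here ...
        upstream.sendall(data)       # hand off to the kernel send queue
    upstream.close()
    conn.close()

# Loopback demonstration: application -> proxy -> upstream server.
upstream_srv = socket.socket(); upstream_srv.bind(("127.0.0.1", 0)); upstream_srv.listen(1)
proxy_srv = socket.socket(); proxy_srv.bind(("127.0.0.1", 0)); proxy_srv.listen(1)

t = threading.Thread(target=run_proxy,
                     args=(proxy_srv, upstream_srv.getsockname()))
t.start()

app = socket.socket(); app.connect(proxy_srv.getsockname())
app.sendall(b"frame-data"); app.close()

server_side, _ = upstream_srv.accept()
buf = b""
while True:
    chunk = server_side.recv(64)
    if not chunk:
        break
    buf += chunk
print(buf)                            # -> b'frame-data'
t.join()
```

Because the proxy sits between the application's send call and the kernel socket, it is exactly where the first queue and the drop/pause decisions described earlier can live without modifying the application.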
  • FIG. 6 is a schematic flowchart diagram of another method for sending stream data according to an embodiment of the present application.
  • the data sending device stores the data block delivered by the application into the first queue.
  • the data block is stream data
  • the first queue is a queue in a user state of an operating system of the data sending device, where the first queue is used to place a data block of the stream data to be sent.
  • In addition to discarding the data blocks with lower priority in the first queue, the data sending device may also reduce the rate at which data blocks are added from the first queue to the second queue, or the data sending device may pause adding the data blocks in the first queue to the second queue until the amount of data in the second queue no longer exceeds the second threshold, and then continue adding the data blocks in the first queue to the second queue.
  • The data transmitting device may simultaneously perform the step of discarding the lower-priority data blocks in the first queue and the step of reducing the rate of adding data blocks from the first queue to the second queue, or it may perform only one of these steps; this embodiment is not limited thereto.
  • the embodiment of the present application further provides a method for sending stream data, as follows:
  • FIG. 7 is a schematic flowchart of still another method for transmitting stream data according to an embodiment of the present application.
  • the method for transmitting stream data is further obtained based on FIG. 3.
  • the method for transmitting stream data includes at least the following steps:
  • the data sending device acquires a data block that is sent by the application, where the data block is stream data.
  • The data sending device may further acquire the amount of data in the first queue, and then detect whether the amount of data in the first queue exceeds the first threshold.
  • The data sending device may detect the amount of data in the first queue in real time or at a fixed frequency, or it may detect the amount of data in the first queue only when it obtains a data block sent by the application. Detecting whether the amount of data in the first queue exceeds the first threshold only when a data block is obtained reduces the power consumption of the data sending device, avoiding the power that real-time or fixed-frequency detection would consume.
  • the first queue is a queue in a user mode of an operating system of the data sending device, and the first queue is used to place a data block of the stream data to be sent.
  • The data sending device may further acquire the amount of data in the second queue, and then detect whether the amount of data in the second queue exceeds the second threshold.
  • When the amount of data in the second queue exceeds the second threshold, the data sending device may reduce the rate of adding data blocks from the first queue to the second queue, or the data sending device may suspend adding the data blocks in the first queue to the second queue until the amount of data in the second queue no longer exceeds the second threshold, and then continue adding the data blocks in the first queue to the second queue.
  • The data sending device may also suspend adding the data blocks in the first queue to the second queue until the amount of data in the second queue no longer exceeds the second threshold and the amount of data in the first queue no longer exceeds the first threshold, and then continue adding the data blocks in the first queue to the second queue; specifically, while suspended, the data sending device discards the data blocks with lower priority in the first queue and also reduces the rate of adding data blocks from the first queue to the second queue.
  • The data sending device may simultaneously perform the step of discarding the lower-priority data blocks in the first queue and the step of reducing the rate of adding data blocks from the first queue to the second queue, or it may perform only one of these steps; this is not limited in the embodiment of the present application.
  • the second queue is a transmission buffer queue corresponding to the TCP protocol in a kernel state of an operating system of the data sending device.
  • In this way, the valuable kernel-state send buffer can be used effectively, avoiding the accumulation of data blocks in the kernel state.
  • In a mobile video surveillance scenario, which may include mobile inspection or drone inspection, the mobile video capture device transmits video stream data to the remote monitoring node through TCP in real time.
  • the real-time requirement for the video stream data is high, so that the remote monitoring node can respond in time.
  • Because the mobile video capture device is constantly moving and the network environment may be constantly changing, ensuring the transmission of real-time video stream data is particularly important. Therefore, by applying the data processing method provided by the embodiment of the present application on the mobile video capture device, real-time video stream transmission can be well guaranteed and the overall experience of the service is improved.
FIG. 8A is a schematic diagram of a process in which a real-time video capture device processes stream data according to an embodiment of this application. The real-time streaming media data is the stream data described in the embodiments of this application, the mobile video capture device may be the data sending device described in the embodiments of this application, and the remote monitoring node may be the data receiving device described in the embodiments of this application. The real-time video capture device acquires the real-time streaming media data, stores it in the user-state cache, and performs priority management on the user-state cache to obtain the priority of the real-time streaming media data. For example, if the stream data stored in the user-state cache includes frames F1, F2, F3, F4, F5, and F6, the real-time video capture device performs priority management on F1, F2, F3, F4, F5, and F6 to obtain their respective priorities.
In this scenario, the first threshold and the second threshold may be set as follows. Taking a video bit rate (VBR) of 4 Mbps, the first threshold may be calculated as min[max(VBR, link-rate)·β, user-space-bufsize]; for example, the first threshold may equal 20 Mb, that is, 2.5 MB. Taking an adjustment parameter α of 100 ms, the occupancy threshold of the storage space of the kernel-state cache (the second threshold) may be set to VBR·α = 4 Mbps × 100 ms = 50 KB. If the network quality deteriorates and the occupied storage space of the kernel-state cache exceeds 50 KB, the real-time video capture device performs the packet loss operation according to priority: as shown in FIG. 8A, F2 can be discarded, and F1, F5, F4, F3, and F6 are transmitted in order of priority. When the link rate changes, for example to 2 Mbps, the threshold can be dynamically modified to the correspondingly calculated value of min[max(VBR, link-rate)·β, user-space-bufsize], for example 0.1 MB. It should be understood that these calculated values are only examples and should not be construed as limiting the embodiments of this application. The link rate can be understood as the rate of the link of the TCP connection between the data sending device and the data receiving device.
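The threshold arithmetic above can be sketched as follows. This is a minimal illustration, assuming the formulas given in the text (second threshold = VBR × α; first threshold = min[max(VBR, link-rate) × β, user-space-bufsize]); the value β = 5 s is inferred here from the 2.5 MB example rather than stated in the text:

```python
def second_threshold_bytes(vbr_bps: float, alpha_s: float) -> float:
    """Kernel-state (second queue) occupancy threshold: VBR * alpha, in bytes."""
    return vbr_bps * alpha_s / 8  # divide by 8: bits -> bytes


def first_threshold_bytes(vbr_bps: float, link_rate_bps: float,
                          beta_s: float, user_space_bufsize_bytes: float) -> float:
    """User-state (first queue) threshold: min(max(VBR, link rate) * beta, buffer size)."""
    return min(max(vbr_bps, link_rate_bps) * beta_s / 8, user_space_bufsize_bytes)


# With VBR = 4 Mbps and alpha = 100 ms, the kernel-state threshold is 50 KB:
print(second_threshold_bytes(4e6, 0.100))  # 50000.0

# With beta = 5 s (an assumed value) and link rate 2 Mbps, the first
# threshold is 20 Mb = 2.5 MB, matching the example in the text:
print(first_threshold_bytes(4e6, 2e6, 5.0, 20e6))  # 2500000.0
```

The same functions can be re-evaluated whenever the measured link rate changes, which is how the dynamic adjustment described above would be realized.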
In this scenario, the data processing method may be as shown in FIG. 8B. The method is illustrated by taking a first threshold of 2.5 MB and a second threshold of 50 KB as an example, and can be as follows:
801: The real-time video capture device collects data blocks of the real-time streaming media data.
802: Determine, according to the amount of data buffered in the kernel state and the amount of data cached in the user state, whether to perform a packet loss operation; if packet loss is required, execute 806; otherwise, execute 803.
Whether to perform the packet loss operation is determined from the occupancy of the storage space of the kernel-state cache and of the user-state cache. Specifically, if the occupied storage space of the kernel-state cache exceeds 50 KB and the occupied storage space of the user-state cache exceeds 2.5 MB, the packet loss operation is performed; if the occupied storage space of the user-state cache does not exceed 2.5 MB, step 803 may be performed.
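The decision in step 802 can be sketched as follows; the function name and return values are illustrative, and only the two thresholds (50 KB and 2.5 MB) come from the example above:

```python
KERNEL_THRESHOLD = 50 * 1000        # second threshold, bytes (50 KB)
USER_THRESHOLD = 2_500_000          # first threshold, bytes (2.5 MB)


def next_action(kernel_bytes: int, user_bytes: int) -> str:
    """Step 802 of FIG. 8B: choose between dropping a low-priority
    block (step 806) and enqueuing the new block (step 803)."""
    if kernel_bytes > KERNEL_THRESHOLD and user_bytes > USER_THRESHOLD:
        return "drop_low_priority"   # step 806
    return "enqueue"                 # step 803


print(next_action(60_000, 3_000_000))  # drop_low_priority
print(next_action(60_000, 1_000_000))  # enqueue
```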
In this way, buffer management for real-time streaming transmission that combines the user-state cache and the kernel-state cache effectively avoids long queuing of data blocks in the kernel-state cache; the network quality and the video bit rate determine the size of the kernel-state cache and the size of the user-state cache; and the real-time streaming rate can be adjusted according to the network quality, avoiding long queuing times and effectively reducing the overall delay of real-time stream data transmission.
FIG. 8C is a schematic diagram of a process in which a remote control device processes stream data according to an embodiment of this application. The real-time control commands may be the stream data described in the embodiments of this application, and the remote control device may be the data sending device described in the embodiments of this application. In this scenario, the remote control device remotely operates a desktop operating system through a user interface (UI). When the network quality is poor, mouse operations may lag. With the method provided by the embodiments of this application, the most recent real-time control command can be sent to the receiving device while the commands queued ahead of it (which have lower priority) are discarded, improving the user experience. The details can be as follows: the remote control device receives three control commands input by the user; as shown in FIG. 8C, the three control commands may be command 1, command 2, and command 3. When the queuing delay in the network is higher than a threshold (500 ms) and the remote control device detects that the kernel-state cache occupancy is large (greater than VBR × 100 ms), the remote control device discards the lower-priority control commands in the user-state cache and stores the most urgent current command, command 3, into the kernel-state cache, so that control command 3 is preferentially transmitted to the receiving device, improving the real-time experience of the user.
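The behavior just described, keeping only the most recent control command when the kernel-state cache is congested, might be sketched like this (the function and data shapes are hypothetical, not taken from the patent):

```python
from collections import deque


def flush_stale_commands(pending: deque, kernel_occupancy: int, threshold: int) -> deque:
    """When the kernel-state cache is congested, discard the queued (older,
    lower-priority) control commands and keep only the most recent one."""
    if kernel_occupancy > threshold and len(pending) > 1:
        latest = pending[-1]
        pending.clear()
        pending.append(latest)
    return pending


q = deque(["command 1", "command 2", "command 3"])
flush_stale_commands(q, kernel_occupancy=60_000, threshold=50_000)
print(list(q))  # ['command 3']
```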
The method for sending stream data provided by the embodiments of this application is described above; the data sending device provided by the embodiments of this application is specifically described below. FIG. 9A is a schematic structural diagram of a data sending device according to an embodiment of this application. The data sending device may be used to perform the method for sending stream data described in the foregoing embodiments; the data sending device is a device applying the TCP protocol, and an application runs in the operating system of the data sending device. The data sending device includes at least:
a storage unit 901, configured to store a data block delivered by the application into a first queue, where the data block is stream data, the first queue is a queue in the user state of the operating system of the data sending device, and the first queue is used to hold data blocks of the stream data to be sent;
a joining unit 902, configured to add at least one data block in the first queue to a second queue when the amount of data in the second queue meets a preset condition, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending device; and
a sending unit 903, configured to send data through the second queue to the data receiving device of the TCP connection.
In an optional implementation, the preset condition is that the amount of data in the second queue does not exceed a second threshold, or that the occupancy ratio of the amount of data in the second queue does not exceed a third threshold.
By storing the data block delivered by the application into the first queue, and then adding at least one data block from the first queue into the second queue when the amount of data in the second queue meets the preset condition, the accumulation of data blocks in the second queue can be effectively reduced, thereby effectively reducing the transmission delay of the stream data, improving the real-time performance of stream data transmitted over the TCP protocol, and improving the efficiency of stream data transmission.
In an optional implementation, the data sending device further includes:
a discarding unit 904, configured to discard a lower-priority data block in the first queue when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
In an optional implementation, the data sending device further includes:
a rate reducing unit 905, configured to reduce the rate at which data blocks are added from the first queue to the second queue when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
In an optional implementation, the data sending device further includes:
a suspending unit 906, configured to suspend adding at least one data block in the first queue to the second queue when the amount of data in the second queue exceeds the second threshold; the joining unit 902 is further configured to continue adding at least one data block in the first queue to the second queue when the amount of data in the second queue does not exceed the second threshold.
The suspending unit 906 is further configured to suspend adding at least one data block in the first queue to the second queue when the occupancy ratio of the amount of data in the second queue exceeds the third threshold; the joining unit 902 is further configured to continue adding when the occupancy ratio of the amount of data in the second queue does not exceed the third threshold.
In an optional implementation, the discarding unit 904 is further configured to discard a lower-priority data block in the first queue when the amount of data in the first queue exceeds the first threshold.
In an optional implementation, the data blocks delivered by the application are data blocks of a video stream, which include bidirectionally predicted (B) frames, inter-predicted (P) frames, and intra-coded (I) frames. In the data blocks of the video stream, the more times a frame is referenced by other frames, the higher its priority; and/or, if the data blocks include a group of consecutive frames that are not referenced by other frames, the first frame of that group has the lowest priority. In the case where the B frame, the P frame, and the I frame are referenced by other frames the same number of times, or where the B frame, the P frame, and the I frame are each the first frame of a group of consecutive frames, the priority of the B frame is lower than the priority of the P frame, and the priority of the P frame is lower than the priority of the I frame.
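One possible encoding of these priority rules, a sketch under the assumption that priority is ordered first by reference count and then by the fixed frame-type tie-break (I above P above B) described above:

```python
def frame_priority(frame_type: str, ref_count: int) -> tuple:
    """Return a sortable priority key: a larger tuple means higher priority.
    Primary key: how many other frames reference this frame.
    Tie-break: I > P > B, as described in the text."""
    type_rank = {"B": 0, "P": 1, "I": 2}[frame_type]
    return (ref_count, type_rank)


frames = [("B", 0), ("I", 5), ("P", 2)]
# Sorting lowest-priority first yields the order in which frames
# would be candidates for dropping under congestion:
drop_order = sorted(frames, key=lambda f: frame_priority(*f))
print(drop_order)  # [('B', 0), ('P', 2), ('I', 5)]
```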
In an optional implementation, the application is a streaming media application, and the data blocks delivered by the application are data blocks of a video stream. The first threshold is a threshold determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter, where the first adjustment parameter is the duration of video stream data that the TCP connection can buffer. The second threshold is a threshold determined by the video bit rate and a second adjustment parameter, where the second adjustment parameter is a delay parameter used to characterize the degree of delay that the streaming application can tolerate.
In an optional implementation, the storage unit 901 is specifically configured to store the data block delivered by the application into the first queue by calling a target application programming interface (API).
In an optional implementation, the storage unit 901 is specifically configured to store the data block delivered by the application into the first queue through a proxy of the application, where the proxy is a process running in the user state of the operating system.
The data sending device shown in FIG. 9A and FIG. 9B may also be used to execute the first embodiment (FIG. 3), the second embodiment (FIG. 6), the third embodiment (FIG. 7), and the fourth embodiment (FIG. 8B); the specific implementation of each unit is not described in detail again here. The joint scheduling module 401 in the data optimization architecture 400 provided in FIG. 4 may be specifically configured to control the storage unit 901 to store the data block delivered by the application into the first queue, where the first queue is equivalent to the user-state cache, and the kernel-state cache control module 403 may be used to control the joining unit 902 to add at least one data block in the first queue to the second queue.
FIG. 10 is a schematic structural diagram of still another data sending device according to an embodiment of this application. The data sending device is a device applying the TCP protocol, and an application runs in its operating system. The queue in the user state of the operating system of the data sending device is called the first queue, and the send buffer queue corresponding to the TCP protocol is called the second queue. The data sending device includes at least a processing circuit 1001, a storage medium 1002, and a transceiver 1003, which are connected to each other through a bus 1004. The storage medium 1002 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and is used for storing related instructions and data.
The transceiver 1003 is configured to receive and transmit data, and may include a network card, an antenna, or the like. The processing circuit 1001 performs, through the transceiver 1003, the step of sending data through the second queue to the data receiving device of the TCP connection; the data sent may be a data block, and the data block is stream data.
The processing circuit 1001 may be one or more central processing units (CPUs), one or more network processors (NPs), one or more application processors (APs), a combination of a CPU and an NP, a combination of a CPU and an AP, or the like; this is not limited in the embodiments of this application. The processing circuit 1001 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The processing circuit 1001 in the data sending device is configured to read the program code stored in the storage medium 1002 and perform the following operations: storing the data block delivered by the application into the first queue; adding at least one data block in the first queue to the second queue when the amount of data in the second queue meets the preset condition; and sending data, through the transceiver, to the data receiving end of the TCP connection by using the second queue.
In an optional implementation, the preset condition is that the amount of data in the second queue does not exceed the second threshold, or that the occupancy ratio of the amount of data in the second queue does not exceed the third threshold.
In an optional implementation, the processing circuit 1001 is further configured to read the program code stored in the storage medium 1002 and discard a lower-priority data block in the first queue when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
In an optional implementation, the processing circuit 1001 is further configured to reduce the rate at which data blocks are added from the first queue to the second queue.
In an optional implementation, the processing circuit 1001 is further configured to suspend adding at least one data block in the first queue to the second queue until the amount of data in the second queue, or its occupancy ratio, no longer exceeds the corresponding threshold, and then to continue adding.
In an optional implementation, the processing circuit 1001 is further configured to discard a lower-priority data block in the first queue when the amount of data in the first queue exceeds the first threshold.
In an optional implementation, the data blocks delivered by the application are data blocks of a video stream, which include bidirectionally predicted (B) frames, inter-predicted (P) frames, and intra-coded (I) frames. In the case where the B frame, the P frame, and the I frame are referenced by other frames the same number of times, or where the B frame, the P frame, and the I frame are each the first frame of a group of consecutive frames, the priority of the B frame is lower than the priority of the P frame, and the priority of the P frame is lower than the priority of the I frame.
In an optional implementation, the application is a streaming media application, and the data blocks delivered by the application are data blocks of a video stream. The first threshold is a threshold determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter, where the first adjustment parameter is the duration of video stream data that the TCP connection can buffer. The second threshold is a threshold determined by the video bit rate and a second adjustment parameter, where the second adjustment parameter is a delay parameter used to characterize the degree of delay that the streaming application can tolerate.
In an optional implementation, the processing circuit 1001 stores the data block delivered by the application into the first queue by calling a target application programming interface (API). In another optional implementation, the processing circuit 1001 stores the data block delivered by the application into the first queue through a proxy of the application, where the proxy is a process running in the user state of the operating system. The processing circuit 1001 may also be used to perform the operations performed by the storage unit 901 and the joining unit 902 shown in FIG. 9A and FIG. 9B, which are not described again here.
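As an illustration of the joining behavior (not the patent's implementation), a non-blocking socket makes the "second queue full" condition observable from user space: `send()` raising `BlockingIOError` corresponds to the kernel send buffer having no room, at which point adding from the user-state queue is suspended:

```python
import socket


def try_fill_kernel_queue(sock: socket.socket, first_queue: list) -> int:
    """Move blocks from the user-state queue into the kernel send buffer
    (the 'second queue') until the kernel refuses more data. The socket
    must be non-blocking; BlockingIOError plays the role of 'the amount
    of data in the second queue exceeds the threshold'."""
    moved = 0
    while first_queue:
        block = first_queue[0]
        try:
            sent = sock.send(block)
        except BlockingIOError:
            break                       # second queue full: suspend adding
        first_queue[0] = block[sent:]   # keep any unsent remainder
        if not first_queue[0]:
            first_queue.pop(0)
        moved += sent
    return moved


# Demo on a local socket pair; a small block fits entirely in the send buffer.
a, b = socket.socketpair()
a.setblocking(False)
queue = [b"x" * 4096]
n = try_fill_kernel_queue(a, queue)
print(n)  # 4096
a.close(); b.close()
```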
An embodiment of this application further provides a computer-readable storage medium storing a computer program. The computer program includes program instructions which, when executed by the processing circuit of the data sending device, cause the processing circuit to perform the method flow shown in the foregoing embodiments, for example: storing the data block delivered by the application into the first queue; adding at least one data block in the first queue to the second queue when the amount of data in the second queue meets the preset condition; and sending data through the second queue to the data receiving end of the TCP connection.
The computer-readable storage medium may be an internal storage unit of the data sending device, such as a hard disk or a memory, or an external storage device of the data sending device, such as a plug-in hard disk provided on the data sending device, a smart memory card (SMC), a secure digital (SD) card, a flash card, and so on.
An embodiment of this application further provides a computer program product. When the computer program product runs on a data sending device, the method flow shown in the foregoing embodiments is implemented.
FIG. 11 is a structural block diagram of an implementation in which the data sending device is a terminal device. As shown in FIG. 11, the terminal device 110 may include an application chip 110, a memory 115 (one or more computer-readable storage media), a radio frequency (RF) module 116, and a peripheral system 117.
The peripheral system 117 is mainly used to implement interaction between the terminal device 110 and the user or external environment, and mainly includes input and output devices. The peripheral system 117 may include a touch screen controller 118, a camera controller 119, an audio controller 120, and a sensor management module 121. Each controller may be coupled to a respective peripheral device (such as the touch screen 123, the camera 124, the audio circuit 125, and the sensor 126). The touch screen 123 may be configured with a self-capacitive touch panel or an infrared touch panel, and the camera 124 may be a 3D camera. The peripheral system 117 may also include other I/O peripherals. The terminal device may obtain video stream data through the camera 124, or obtain audio stream data through the audio circuit 125, and so on.
The application chip 110 may integrate one or more processors 111, a clock module 112, and a power management module 113. The clock module 112 integrated in the application chip 110 is mainly used to generate the clocks required for data transmission and timing control by the processor 111. The power management module 113 integrated in the application chip 110 is mainly used to provide a stable, high-precision voltage for the processor 111, the radio frequency module 116, and the peripheral system. It can be understood that the terminal device may include other chips in addition to the application chip, such as a baseband chip.
The radio frequency (RF) module 116 is used to receive and transmit radio frequency signals, and mainly integrates a receiver and a transmitter. The RF module 116 communicates with communication networks and other communication devices via radio frequency signals. The RF module 116 may include, but is not limited to, an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chip, a SIM card, a storage medium, and the like. In some embodiments, the RF module 116 may be implemented on a separate chip.
The memory 115 is coupled to the processor 111 and is used to store various software programs and/or sets of instructions. The memory 115 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 115 may store an operating system, such as an embedded operating system like ANDROID, IOS, WINDOWS, or LINUX. The memory 115 may also store a network communication program that can be used to communicate with one or more additional devices, one or more terminal devices, or one or more network devices. The memory 115 may also store a user interface program, which can vividly display the content of an application through a graphical operation interface, and receive user control operations on the application through input controls such as menus, dialog boxes, and keys. Video stream data, audio stream data, control commands, and the like may also be stored in the memory 115. The memory 115 may also store one or more applications; as shown in FIG. 11, these applications may include: social applications (such as Facebook), image management applications (such as an album), map applications (such as Google Maps), browsers (such as Google Chrome), and the like.
It should be understood that the terminal device 110 is only an example provided by the embodiments of this application; the terminal device 110 may have more or fewer components than shown, may combine two or more components, or may have a different configuration of components.
The terminal device shown in FIG. 11 can also be used to perform the method provided by the embodiments of this application, for example the method shown in FIG. 3 and the other embodiments; the implementation is not detailed here.
A person of ordinary skill in the art may understand that all or part of the processes of the foregoing method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the flows of the method embodiments described above may be included. The foregoing storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

Embodiments of this application provide a method for sending stream data and a data sending device, applied to the data sending end of a Transmission Control Protocol (TCP) connection, where an application runs in the operating system of the data sending end. The method includes: storing a data block delivered by the application into a first queue, where the data block is stream data, the first queue is a queue in the user state of the operating system of the data sending end, and the first queue is used to hold data blocks of the stream data to be sent; when the amount of data in a second queue meets a preset condition, adding at least one data block in the first queue to the second queue, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending end; and sending, by the data sending end, data through the second queue to the data receiving end of the TCP connection. Implementing this application can effectively reduce the accumulation of stream data in the kernel state, reduce the transmission delay of stream data, and improve the real-time performance of stream data transmission over TCP.

Description

Method for Sending Stream Data and Data Sending Device

Technical Field
Embodiments of this application relate to the field of communications technologies, and in particular, to a method for sending stream data and a data sending device.
Background
Streaming transmission falls into two modes: progressive streaming and real-time streaming. The data carried by streaming transmission may be called stream data (a data stream), which is widely used in scenarios such as audio and video, network monitoring, online gaming, and financial services. Progressive streaming is sequential downloading: for example, while an audio/video file is being downloaded, the user can watch the part that has already been downloaded. Real-time streaming is always transmitted in real time, for example a live broadcast; in this case, the audio and video usually need to be watched in real time. Real-time streaming therefore cannot be delayed or excessively buffered, and the application needs to process the received data immediately and present it to the user. Usage scenarios of real-time streaming include live network video, remote desktop sharing, video surveillance, video telephony, and so on. With real-time streaming, the user can play an audio/video file while it is being downloaded, without waiting for the complete file, saving minutes or even hours of download time and reducing the demand on system cache capacity. Meanwhile, the Transmission Control Protocol (TCP), widely used in the communications field, can also be used to transmit stream data. When TCP is used, after the application sending the stream data (specifically, a real-time streaming media application) delivers a data block of the stream data to the kernel-state cache, the data block stays in the kernel-state cache until the sending end confirms that the receiving end has successfully received it; only then is the data block deleted from the kernel-state cache of the sending end. This processing has a long queuing delay (the time a packet waits in the kernel-state send queue before being sent), which lengthens the overall delay of stream data transmission; yet stream data demands high real-time transmission and tolerates little transmission delay, so transmitting stream data over TCP in this way cannot meet its requirements. For example, in a mobile network scenario with a bandwidth of 10 Mbps and 4 MB of video stream data blocks to be transmitted, since 4 MB equals 32 Mb, the queuing delay of the video stream data blocks in the kernel-state cache may reach 3.2 s, which clearly has a huge impact on guaranteeing real-time stream data transmission. Therefore, the existing way of transmitting stream data over the TCP protocol cannot meet the transmission delay requirements of stream data.
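The 3.2 s figure above follows from dividing the queued data (in bits) by the link bandwidth; a one-line sketch:

```python
def queuing_delay_s(buffered_bytes: float, bandwidth_bps: float) -> float:
    """Time a block waits in the kernel send queue: data ahead of it
    (converted to bits) divided by the link bandwidth."""
    return buffered_bytes * 8 / bandwidth_bps


# 4 MB already queued on a 10 Mbps link (the example above):
print(queuing_delay_s(4e6, 10e6))  # 3.2
```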
Summary
Embodiments of this application provide a method for sending stream data and a data sending device, which can effectively reduce the accumulation of stream data in the kernel state, reduce the transmission delay of stream data, and improve the real-time performance of transmitting stream data over the TCP protocol.
According to a first aspect, an embodiment of this application provides a method for sending stream data, applied to the data sending end of a TCP connection, where an application runs in the operating system of the data sending end. The method includes: storing a data block delivered by the application into a first queue, where the data block is stream data, the first queue is a queue in the user state of the operating system of the data sending end, and the first queue is used to hold data blocks of the stream data to be sent; when the amount of data in a second queue meets a preset condition, adding at least one data block in the first queue to the second queue, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending end; and sending, by the data sending end, data through the second queue to the data receiving end of the TCP connection. By storing the data block delivered by the application into the first queue and then, when the amount of data in the second queue meets the preset condition, adding at least one data block from the first queue into the second queue, the accumulation of data blocks in the second queue can be effectively reduced, thereby effectively reducing the transmission delay of stream data, improving the real-time performance of stream data transmitted over the TCP protocol, and improving the efficiency of stream data transmission.
In an optional implementation, the preset condition is that the amount of data in the second queue does not exceed a second threshold, or that the occupancy ratio of the amount of data in the second queue does not exceed a third threshold.
In an optional implementation, the method further includes: when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold, discarding a lower-priority data block in the first queue. When the network quality is poor or changes drastically, the data blocks in the second queue may not be sent out in time, so that the amount of data in the second queue becomes too large and the queuing delay of data blocks in the kernel state increases. With the method provided by the embodiments of this application, higher-priority data blocks can be added to the second queue first and lower-priority data blocks discarded, satisfying the transmission delay requirements of stream data to the greatest extent and reducing the transmission delay of stream data.
In an optional implementation, when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold, the method further includes: reducing the rate at which data blocks are added from the first queue to the second queue. When this rate is variable, reducing it effectively prevents the second queue from holding so many data blocks that no more can be stored and the application stalls.
In an optional implementation, when the amount of data in the second queue exceeds the second threshold, the method further includes: suspending the adding of at least one data block in the first queue to the second queue until the amount of data in the second queue does not exceed the second threshold, and then continuing to add at least one data block in the first queue to the second queue.
In an optional implementation, when the occupancy ratio of the amount of data in the second queue exceeds the third threshold, the method further includes: suspending the adding of at least one data block in the first queue to the second queue until the occupancy ratio of the amount of data in the second queue does not exceed the third threshold, and then continuing. When the rate of adding data blocks from the first queue to the second queue is fixed, suspending the adding effectively prevents the second queue from holding too many data blocks and increasing their transmission delay.
In an optional implementation, the method further includes: when the amount of data in the first queue exceeds a first threshold, discarding a lower-priority data block in the first queue.
In an optional implementation, the data blocks delivered by the application are data blocks of a video stream, which include bidirectionally predicted (B) frames, inter-predicted (P) frames, and intra-coded (I) frames. In the data blocks of the video stream, the more times a frame is referenced by other frames, the higher its priority; and/or, if the data blocks include a group of consecutive frames that are not referenced by other frames, the first frame of that group has the lowest priority. By determining the priority of each frame in the data blocks of the video stream, this priority mechanism allows higher-priority frames to be sent first and lower-priority frames to be discarded when the network quality is poor, ensuring that important frames are sent out, guaranteeing communication quality to the greatest extent without affecting video playback, and maximizing the user experience.
In an optional implementation, in the case where the B frame, the P frame, and the I frame are referenced by other frames the same number of times, or where the B frame, the P frame, and the I frame are each the first frame of a group of consecutive frames, the priority of the B frame is lower than the priority of the P frame, and the priority of the P frame is lower than the priority of the I frame.
In an optional implementation, the application is a streaming media application, and the data blocks delivered by the application are data blocks of a video stream. The first threshold is a threshold determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter, where the first adjustment parameter is the duration of video stream data that the TCP connection can buffer; the second threshold is a threshold determined by the video bit rate and a second adjustment parameter, where the second adjustment parameter is a delay parameter used to characterize the degree of delay the streaming application can tolerate. Setting the first threshold from the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and the first adjustment parameter avoids the situation where a fixed first threshold prevents effective use of the first queue; setting the second threshold from the video bit rate and the second adjustment parameter dynamically adjusts the occupancy threshold of the second queue, so that the occupancy of the second queue can be judged comprehensively and its space utilization improved.
In an optional implementation, storing the data block delivered by the application into the first queue includes: storing the data block delivered by the application into the first queue by calling a target application programming interface (API).
In an optional implementation, storing the data block delivered by the application into the first queue includes: storing the data block delivered by the application into the first queue through a proxy of the application, where the proxy is a process running in the user state of the operating system.
According to a second aspect, an embodiment of this application further provides a data sending device. The data sending device is a device applying the TCP protocol, and an application runs in its operating system. The data sending device includes: a storage unit, configured to store a data block delivered by the application into a first queue, where the data block is stream data, the first queue is a queue in the user state of the operating system of the data sending device, and the first queue is used to hold data blocks of the stream data to be sent; a joining unit, configured to add at least one data block in the first queue to a second queue when the amount of data in the second queue meets a preset condition, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending device; and a sending unit, configured to send data through the second queue to the data receiving device of the TCP connection.
In an optional implementation, the preset condition is that the amount of data in the second queue does not exceed a second threshold, or that the occupancy ratio of the amount of data in the second queue does not exceed a third threshold.
In an optional implementation, the data sending device further includes a discarding unit, configured to discard a lower-priority data block in the first queue when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
In an optional implementation, the data sending device further includes a rate reducing unit, configured to reduce the rate at which data blocks are added from the first queue to the second queue when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
In an optional implementation, the data sending device further includes a suspending unit, configured to suspend adding at least one data block in the first queue to the second queue when the amount of data in the second queue exceeds the second threshold; the joining unit is further configured to continue adding at least one data block in the first queue to the second queue when the amount of data in the second queue does not exceed the second threshold.
In an optional implementation, the suspending unit is configured to suspend adding at least one data block in the first queue to the second queue when the occupancy ratio of the amount of data in the second queue exceeds the third threshold; the joining unit is further configured to continue adding when the occupancy ratio does not exceed the third threshold.
In an optional implementation, the discarding unit is further configured to discard a lower-priority data block in the first queue when the amount of data in the first queue exceeds a first threshold.
In an optional implementation, the data blocks delivered by the application are data blocks of a video stream, which include bidirectionally predicted (B) frames, inter-predicted (P) frames, and intra-coded (I) frames; in the data blocks of the video stream, the more times a frame is referenced by other frames, the higher its priority; and/or, if the data blocks include a group of consecutive frames that are not referenced by other frames, the first frame of that group has the lowest priority.
In an optional implementation, where the B frame, the P frame, and the I frame are referenced by other frames the same number of times, or where each is the first frame of a group of consecutive frames, the priority of the B frame is lower than the priority of the P frame, and the priority of the P frame is lower than the priority of the I frame.
In an optional implementation, the application is a streaming media application, and the data blocks it delivers are data blocks of a video stream; the first threshold is determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter (the duration of video stream data that the TCP connection can buffer); the second threshold is determined by the video bit rate and a second adjustment parameter (a delay parameter characterizing the degree of delay the streaming application can tolerate).
In an optional implementation, the storage unit is specifically configured to store the data block delivered by the application into the first queue by calling a target application programming interface (API).
In an optional implementation, the storage unit is specifically configured to store the data block delivered by the application into the first queue through a proxy of the application, where the proxy is a process running in the user state of the operating system.
According to a third aspect, an embodiment of this application further provides a data sending device. The data sending device is a device applying the TCP protocol, and an application runs in its operating system. The data sending device includes a processing circuit, a storage medium, and a transceiver, which are interconnected by lines, where the storage medium stores program instructions. When executed by the processing circuit, the program instructions cause the processing circuit to: store a data block delivered by the application into a first queue, where the data block is stream data, the first queue is a queue in the user state of the operating system of the data sending end, and the first queue is used to hold data blocks of the stream data to be sent; and, when the amount of data in a second queue meets a preset condition, add at least one data block in the first queue to the second queue, where the second queue is the send buffer queue corresponding to the TCP protocol in the kernel state of the operating system of the data sending end. The transceiver is configured to send data through the second queue to the data receiving device of the TCP connection.
In an optional implementation, the preset condition is that the amount of data in the second queue does not exceed a second threshold, or that the occupancy ratio of the amount of data in the second queue does not exceed a third threshold.
In an optional implementation, the processing circuit is further configured to discard a lower-priority data block in the first queue when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio of the amount of data in the second queue exceeds the third threshold.
In an optional implementation, the processing circuit is further configured to reduce the rate at which data blocks are added from the first queue to the second queue when the amount of data in the second queue exceeds the second threshold, or when the occupancy ratio exceeds the third threshold.
In an optional implementation, the processing circuit is further configured to suspend adding at least one data block in the first queue to the second queue when the amount of data in the second queue exceeds the second threshold, until the amount of data in the second queue does not exceed the second threshold, and then continue adding.
In an optional implementation, the processing circuit is further configured to suspend adding at least one data block in the first queue to the second queue when the occupancy ratio of the amount of data in the second queue exceeds the third threshold, until the occupancy ratio does not exceed the third threshold, and then continue adding.
In an optional implementation, the processing circuit is further configured to discard a lower-priority data block in the first queue when the amount of data in the first queue exceeds a first threshold.
In an optional implementation, the data blocks delivered by the application are data blocks of a video stream, which include bidirectionally predicted (B) frames, inter-predicted (P) frames, and intra-coded (I) frames; in the data blocks of the video stream, the more times a frame is referenced by other frames, the higher its priority; and/or, if the data blocks include a group of consecutive frames that are not referenced by other frames, the first frame of that group has the lowest priority.
In an optional implementation, where the B frame, the P frame, and the I frame are referenced by other frames the same number of times, or where each is the first frame of a group of consecutive frames, the priority of the B frame is lower than the priority of the P frame, and the priority of the P frame is lower than the priority of the I frame.
In an optional implementation, the application is a streaming media application, and the data blocks it delivers are data blocks of a video stream; the first threshold is determined by the video bit rate, the rate of the TCP connection, the amount of data in the first queue, and a first adjustment parameter (the duration of video stream data that the TCP connection can buffer); the second threshold is determined by the video bit rate and a second adjustment parameter (a delay parameter characterizing the degree of delay the streaming application can tolerate).
In an optional implementation, the processing circuit is specifically configured to store the data block delivered by the application into the first queue by calling a target application programming interface (API).
In an optional implementation, the processing circuit is specifically configured to store the data block delivered by the application into the first queue through a proxy of the application, where the proxy is a process running in the user state of the operating system.
According to a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program. The computer program includes program instructions which, when executed by the processing circuit of a data sending device, cause the processing circuit to perform the method described in the first aspect or any possible implementation of the first aspect.
According to a fifth aspect, an embodiment of this application further provides a computer program product which, when run on a data sending device, implements the first aspect or any possible implementation of the first aspect.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the background more clearly, the following describes the accompanying drawings used in the embodiments of this application or the background.
FIG. 1A is a schematic diagram of a remote desktop sharing scenario according to an embodiment of this application;
FIG. 1B is a schematic diagram of a video surveillance scenario according to an embodiment of this application;
FIG. 2A is a schematic diagram of video encoding and decoding according to an embodiment of this application;
FIG. 2B is a schematic diagram of a system architecture according to an embodiment of this application;
FIG. 3 is a schematic flowchart of a method for sending stream data according to an embodiment of this application;
FIG. 4 is a schematic diagram of a data optimization architecture according to an embodiment of this application;
FIG. 5A is a deployment of a data optimization architecture according to an embodiment of this application;
FIG. 5B is another deployment of a data optimization architecture according to an embodiment of this application;
FIG. 6 is a schematic flowchart of another method for sending stream data according to an embodiment of this application;
FIG. 7 is a schematic flowchart of still another method for sending stream data according to an embodiment of this application;
FIG. 8A is a schematic diagram of a process in which a real-time video capture device processes stream data according to an embodiment of this application;
FIG. 8B is a schematic diagram of a specific scenario of a method for sending stream data according to an embodiment of this application;
FIG. 8C is a schematic diagram of a process in which a remote control device processes stream data according to an embodiment of this application;
FIG. 9A is a schematic structural diagram of a data sending device according to an embodiment of this application;
FIG. 9B is a schematic structural diagram of another data sending device according to an embodiment of this application;
FIG. 10 is a schematic structural diagram of still another data sending device according to an embodiment of this application;
FIG. 11 is a schematic structural diagram of a terminal device according to an embodiment of this application.
具体实施方式
下面将结合本申请实施例中的附图对本申请实施例中技术方案进行描述。
以下将具体介绍本申请实施例中用到的技术术语。
内核态:运行在内核地址空间的进程,也可以称运行在内核地址空间的进程为处于内核态;本申请实施例中,称处于操作系统的内核态中,与TCP协议对应的发送缓存队列为第二队列。应理解,一种实现方式下,一个TCP连接对应一个发送缓存队列。
用户态:运行在用户地址空间的进程,也可以称运行在用户地址空间的进程为处于用户态;本申请实施例中,称操作系统的用户态中的队列为第一队列。
数据块:一组或几组按顺序连续排列在一起的记录;本申请实施例中,数据块为流数据的数据块,其中,流式传输所传输的数据称为流数据(data stream),因此不应将本申请中的数据块理解为具有限定意义。
I(Intra-coded)帧:I帧又称帧内编码帧,是一种自带全部信息的独立帧,无需参考其他帧便可独立进行解码,可以简单理解为一张静态画面。视频序列中的第一个帧始终都是I帧,I帧也常被称为关键帧。
P(Predicted)帧:P帧又称帧间预测编码帧,需要参考前面的I帧才能进行编码,表示的是当前帧画面与前一帧(前一帧可能是I帧也可能是P帧)画面的差别。解码时需要用之前缓存的画面叠加上本帧定义的差别,生成最终画面。与I帧相比,P帧通常占用更少的数据位,但不足之处在于,由于P帧对前面的P参考帧和I参考帧有着复杂的依赖性,因此对传输错误非常敏感。
B(Bidirectional Predicted)帧:B帧又称双向预测编码帧,B帧记录的是本帧与前后帧的差别。也就是说要解码B帧,不仅要解码之前的缓存画面,还要解码之后的画面,通过前后画面的与本帧数据的叠加取得最终的画面。B帧压缩率高,但是对解码性能要求较高。
视频比特率:视频比特率是视频流正常播放所需的网络比特率。视频比特率可以是恒定比特率(constant bit rate,CBR)或动态比特率(variable bit rate,VBR)等。在使用CBR的情况下,视频在一段时间内的平均比特率保持不变;在使用VBR的情况下,比特率可以随时间变化。压缩时根据视频流数据确定所使用的视频比特率,是一种以质量为前提、兼顾文件大小的方式。
在具体实现中,流式传输又分为顺序流式传输和实时流式传输,顺序流式传输场景下,用户可以边下载边观看,如用户在使用终端设备时,可以通过下载控件来下载视频,在终端设备下载视频的同时,用户还可以观看该视频。
实时流式传输总是实时传输,例如实时现场直播。其中,实时流式传输的场景至少包括:网络视频直播、远程桌面共享、视频监控以及视频电话等等。参见图1A,图1A是本申请实施例提供的一种远程桌面共享的场景示意图,在该远程桌面共享场景下,终端设备120与其他远程终端连接,并将远程终端的桌面显示到终端设备120,该场景下,需要实时地将远程终端的桌面显示到终端设备120,因此通信需要是实时的,且基于TCP传输。
参见图1B,图1B是本申请实施例提供的一种视频监控的场景示意图。图1B示出了一种典型的视频实时流式传输如视频监控的场景,该视频监控的场景下,用户可以通过智能手机130中的应用程序或浏览器连接到摄像机,应用程序或浏览器连接到摄像机上的超文本传输协议(Hyper Text Transfer Protocol,HTTP)服务器,并获取设备列表,从而获得监控设备,这样用户就可以获取监控设备获得的实时视频,同时该监控设备所获取的实时视频流数据块是实时的,基于TCP传输。
可以理解的是,实时流式传输所包含的场景不限于图1A和图1B,图1A和图1B仅为示例,不应理解为具有限定意义。图1A中的远程终端以及图1B中的监控设备可以理解为本申请实施例中的数据发送设备。
其中,传输完整的视频画面时,数据量非常大,对现有的网络和存储来说是不可接受的。为了能够使视频便于传输和存储,通常采用压缩和编解码的方式,将重复信息在发送端去掉,在接收端恢复出来,这样将大大减少视频数据文件的大小。
压缩:大多数流行的视频编解码器允许空间(帧内)压缩以及时间(帧间)压缩。压缩算法可能会发生变化,但所有视频编解码器都遵循类似的压缩方案。
编解码:编码器将多张图像进行编码后生成一段段的图像群组(Group Of Pictures,GOP),解码器在播放时则是读取一段段的GOP进行解码后再渲染显示。GOP是一组连续的画面,由一张I帧和数张B/P帧组成,是视频图像编码器和解码器存取的基本单位,它的排列顺序将会一直重复到视频结束。I帧是内部编码帧(也称为关键帧),P帧是前向预测帧(前向参考帧),B帧是双向内插帧(双向参考帧)。简单地讲,I帧是一个完整的画面,而P帧和B帧记录的是相对于I帧的变化。如果没有I帧,P帧和B帧就无法解码。参见图2A,图2A是本申请实施例提供的视频编解码示意图。可以理解的是,图2A所示的视频流编解码仅为一种示例,不应理解为对本申请实施例具有限定意义。
传输控制协议(Transmission Control Protocol,TCP)是当前互联网中广泛支持的传输协议,同时大多数应用程序将数据承载在TCP协议之上进行传输。
其中,使用TCP作为传输协议的流式传输示例:
1、远程桌面共享使用远程帧缓冲(Remote Frame Buffer,RFB)协议,通过TCP协议传输。例如远程桌面共享应用:RealVNC,TightVNC,TigerVNC,TeamViewer。
2、视频监控:在超文本传输协议(Hyper Text Transfer Protocol,HTTP)浏览器中观看实时视频监控的应用。
3、视频通话:虽然大多数视频通话使用用户数据报协议(User Datagram Protocol,UDP)作为传输层协议,但是在网络地址转换(Network Address Translation,NAT)阻止UDP流量的情况下,实时流媒体应用都会回退到使用TCP协议。如应用程序网络电话Skype,瓦次普(WhatsApp Messenger,WhatsApp)。
4、其他通用实时流式传输协议通常使用的是:1)、HTTP实时流媒体协议(HTTP Live Streaming,HLS);2)、基于HTTP的动态自适应流(Dynamic Adaptive Streaming over HTTP,DASH);3)、实时消息传输协议(Real Time Messaging Protocol,RTMP)。
基于以上所介绍的TCP协议,在实时流媒体应用使用内核TCP协议栈传输实时视频流数据时,会面临以下重要的技术问题:
一、内核态缓存过度使用造成延迟较高,Linux内核中默认的TCP内核态缓存大小约为4MB。对于非实时业务,缓存能够有效地减少应用程序和Linux系统的交互,提高吞吐量。但是对于实时业务,如实时流式传输,过大的缓存会造成排队时延过长,产生一定的副作用。
二、无线链路的影响,如果网络链路不好,那么待传输的数据将在内核态缓存中开始缓存和排队,等待发送。如果待传输的数据为实时流数据,那么实时流数据的缓存将引入延迟,从而影响用户体验,如造成卡顿,影响用户观看等。
基于上述介绍,参见图2B,图2B是本申请实施例提供的一种系统架构示意图。套接字(Socket)是网络通信的端点,建立一条网络通信连接至少需要一对套接字,每个套接字绑定一个端口号。具体地,socket本质上是一组应用程序编程接口(Application Programming Interface,API),具体工作时需要调用库(lib)实现。由于应用运行在用户态,而协议栈实现位于内核态,因此,socket衔接用户态和内核态。
其中,套接字应用程序(Socket Application):用于使用套接字编程接口开发网络通信的应用程序;
套接字应用程序编程接口(socket API):即socket编程接口;
套接字接口实现:用于套接字编程接口的内部实现;
传输层与网络层的协议实现:用于传输层与网络层的协议实现;
数据链路层协议实现:用于数据链路层协议实现,如以太网协议或WIFI协议等;
在具体实现中,数据发送设备中的应用程序调用套接字API,通过该套接字API创建套接字,从而通过协议栈实现建立与数据接收设备的TCP连接,建立TCP连接之后,数据发送设备通过该TCP连接发送流数据或接收流数据。
在图2B所示的系统架构下,本申请实施例提供了一种数据处理方法,以下将具体介绍本申请实施例所提供的数据处理方法。
参见图3,图3是本申请实施例提供的一种发送流数据的方法的流程示意图,该发送流数据的方法可应用于数据发送设备,该数据发送设备可包括手机、平板电脑、笔记本电脑、掌上电脑、移动互联网设备(英文:mobile internet device,简称:MID)等,可选地,该数据发送设备还可以包括网络摄像头和媒体直播客户端等等,以及该数据发送设备还可以包括采集视频信息的服务器和传输实时流数据的网络中间设备等等,本申请实施例对于数据发送设备不作限定。具体地,该发送流数据的方法应用于TCP连接的数据发送端,该数据发送端的操作系统中运行有应用,该数据发送端的操作系统中运行的应用可以包括流媒体应用等等,本申请实施例不作唯一性限定。其中,该数据发送端为本申请实施例中的数据发送设备。如图3所示,该发送流数据的方法至少包括以下步骤:
301、将应用下发的数据块存入第一队列,上述数据块为流数据,上述第一队列为上述数据发送端的操作系统的用户态中的队列,上述第一队列用于放置待发送的流数据的数据块;
其中,流数据可以包括视频流数据,如在实时流式传输的场景为视频监控或视频通话的情况下,该流数据可以为由摄像头采集的视频流数据。该流数据还可以包括图像流数据,如在实时流式传输的场景为远程桌面共享的情况下,该流数据可以为数据发送设备采集的图像流数据。也就是说,本申请实施例中,数据块可以为视频流的数据块,还可以为图像流数据的数据块等。
302、在第二队列中的数据量满足预设条件的情况下,将上述第一队列中的至少一个数据块加入上述第二队列,上述第二队列为上述数据发送端的操作系统的内核态中,TCP协议对应的发送缓存队列;
其中,预设条件为与第二队列中的数据量相关的条件,具体地,上述预设条件为:上述第二队列中的数据量不超过第二阈值,或者,上述第二队列中的数据量的占用比不超过第三阈值。
其中,第二阈值为动态的衡量第二队列中的数据量阈值,也可以称该第二阈值为动态的衡量第二队列中的数据量的大小的阈值。该第二阈值可以为由视频比特率以及第二调节参数确定的阈值,上述第二调节参数为延迟参数。具体地,该第二调节参数可以用于表征流媒体应用可容忍的延迟程度。也就是说,第二调节参数为流媒体应用可容忍的延迟参数。其中,不同的流媒体应用可容忍的延迟程度可能不同,因此不同的流媒体应用可能对应不同的第二调节参数;相应地,不同的流媒体应用也可能对应相同的第二调节参数,本申请实施例不作限定。
举例来说,根据视频比特率VBR和一个可调节参数alpha来综合确定第二队列中的数据量阈值,其中,alpha值反映了流媒体应用可容忍的延迟。例如假设视频比特率VBR=4Mbps,流媒体应用为对时延极度敏感的实时视频通话,最大容忍延迟alpha等于100ms,那么第二阈值等于VBR*alpha=4Mbps*0.1s=50KB。而如果该流媒体应用为对实时性要求较低的应用,例如直播或者需要容忍内容审查延迟的应用,那么alpha值可以等于5s,可计算出第二阈值等于2.5MB。依据本申请实施例所提供的确定第二阈值的方法,可以有效地避免将第二队列中的数据量阈值设置为固定值的种种弊端。如若将第二队列中的数据量阈值设置为固定值,且该固定值较小,那么一旦设置了较小的阈值,则在网络质量良好的情况下,很可能会降低视频的清晰度,以及可能导致网络带宽利用率低下的现象。
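上述第二阈值的计算方式可用如下示意性代码表示(仅为便于理解的草图,函数名与单位换算均为示例性假设,并非对实现方式的限定):

```python
def second_threshold_bytes(vbr_bps: float, alpha_s: float) -> float:
    """第二阈值 = 视频比特率 VBR × 延迟参数 alpha,换算为字节。

    vbr_bps : 视频比特率,单位 bit/s
    alpha_s : 第二调节参数(流媒体应用可容忍的延迟),单位 s
    """
    return vbr_bps * alpha_s / 8  # bit 换算为 byte

# 实时视频通话:VBR = 4 Mbps,alpha = 100 ms,阈值为 50KB
print(second_threshold_bytes(4e6, 0.1))  # 50000.0
# 直播类应用:alpha = 5 s,阈值为 2.5MB
print(second_threshold_bytes(4e6, 5))    # 2500000.0
```

可见对时延越敏感的应用,alpha 越小,内核态发送缓存队列允许堆积的数据量也越小。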
可以理解的是,本申请实施例对于第二调节参数如何设置以及该第二调节参数为多少不作限定。如第二调节参数可以根据用户明显能够感觉到卡顿的时间来确定,又如该第二调节参数还可以依据经验值来确定等等。
其中,第三阈值可以用来衡量第二队列中的数据量的占用比,如该第三阈值为80%,即表示第二队列中的数据量占用了第二队列的总数据量的80%。可选的,第三阈值还可以为其他值,本申请实施例不作限定。
303、上述数据发送端通过上述第二队列,向上述TCP连接的数据接收端发送数据。
实施本申请实施例,通过将应用下发的数据块存入第一队列,然后在第二队列中的数据量满足预设条件的情况下,再将第一队列中的至少一个数据块加入第二队列,可以有效减少第二队列中数据块的堆积,避免了第二队列中的数据量过多而无法存入造成卡顿的现象,而且还有效减少了数据的传输时延,提高了使用TCP协议传输数据的实时性,提高了数据传输的效率。
可选的,在网络质量不好或网络质量变化剧烈的情况下,为了进一步降低数据发送前在第二队列中的排队的时间,本申请实施例还提供了一种流数据发送的方法,可以保证优先发送优先级较高的数据块,即保留第一队列中优先级较高的数据块,而丢弃一些优先级较低的数据块。因此,基于图3所描述的数据处理方法,上述方法还包括:
在上述第二队列中的数据量超过上述第二阈值的情况下,或者,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,丢弃上述第一队列内优先级较低的数据块。
本申请实施例中,优先级较低的数据块可以为第一队列中优先级最低的数据块,也可以为第一队列中除优先级最高的数据块之外的数据块等,本申请实施例对于具体丢弃多少数据块不作限定。
具体地,在上述应用下发的数据块为视频流的数据块的情况下,上述视频流的数据块包括B帧、P帧和I帧,由此本申请实施例还提供了一种确定数据块的优先级的方法,如下所示:
在上述视频流的数据块中,被其他帧引用的次数越多的帧的优先级越高;和/或,在上述视频流的数据块中包括连续多帧未被其他帧引用,则上述连续多帧中的第一帧的优先级最低。
本申请实施例中,可以依据引用次数来确定优先级,也可以在包含连续多帧未被其他帧引用的情况下,将该连续多帧中的第一帧的优先级确定为最低,以上两种实现方式本申请实施例不作唯一性限定。可选的,本申请实施例可以使用帧与帧之间引用的次数来衡量优先级,即被其他帧引用的次数越多的帧的优先级越高。在该优先级规则下,未被其他帧引用的帧的优先级最低;若视频流的数据块中包含有连续多帧未被其他帧引用,则该连续多帧的优先级均为最低,而若将该连续多帧全部丢弃,则很可能会导致视频播放质量的下降,因此,在视频流的数据块中包含连续多帧未被其他帧引用时,仅将该连续多帧中的第一帧的优先级确定为最低。
进一步地,在依据上述优先级规则确定各个帧的优先级后,还可能会出现优先级相同的情况,因此,在上述B帧、上述P帧和上述I帧被其他帧引用的次数相同的情况下,或者,在上述B帧、上述P帧和上述I帧分别为一组连续多帧中的第一帧的情况下,
上述B帧的优先级低于上述P帧的优先级,上述P帧的优先级低于上述I帧的优先级。
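上述优先级规则(被引用次数、未被引用的连续多帧、以及引用次数相同时按B帧<P帧<I帧排序)可用如下示意性代码刻画(以可比较的元组表示优先级,具体数值编码为一种假设性实现):

```python
TYPE_RANK = {"B": 0, "P": 1, "I": 2}  # 同等条件下 B 帧 < P 帧 < I 帧

def frame_priority(frame_type: str, ref_count: int,
                   first_of_unreferenced_run: bool = False) -> tuple:
    """返回可直接比较的优先级元组:
    1) 连续多帧未被引用时,其中第一帧优先级最低;
    2) 被其他帧引用的次数越多,优先级越高;
    3) 前两项相同时,按 B < P < I 排序。"""
    not_lowest = 0 if first_of_unreferenced_run else 1
    return (not_lowest, ref_count, TYPE_RANK[frame_type])

# 被引用 3 次的 P 帧优先级高于仅被引用 1 次的 I 帧
assert frame_priority("P", 3) > frame_priority("I", 1)
# 引用次数相同时,B 帧优先级低于 P 帧,P 帧低于 I 帧
assert frame_priority("B", 2) < frame_priority("P", 2) < frame_priority("I", 2)
```

在丢包时,只需按该元组排序并丢弃最小者即可。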
可选的,数据发送设备可以为每帧流数据初始分配一个优先级,然后基于上述优先级规则分配优先级,即如果帧被其他帧引用,则该帧的优先级增加(如0.1),帧优先级可以与其他帧的引用数成正比。
可以理解的是,数据发送设备可以将应用下发的数据块存入第一队列后,为每帧数据分配优先级,也可以在将应用下发的数据块存入第一队列时,为每帧数据分配优先级等,本申请实施例对于该数据发送设备何时分配优先级不作唯一性限定。
实施本申请实施例,在网络质量不好或网络质量变化剧烈的情况下,第二队列中的数据量可能会很多,利用本申请实施例所提供的方法,可以将优先级较高的数据块发送出去,丢弃优先级较低的数据块,从而最大限度的保障了通信质量,减少了数据在内核态的发送队列中的排队时间。
可选的,在具体实现中,常常会出现网络质量不好或者网络质量变化剧烈的情况,该情况下,第二队列中的数据量可能已经足够多了,这时若再将第一队列中的数据块加入第二队列,则会严重影响数据块在内核态的发送队列中的排队时间,因此,基于上述所描述的发送流数据的方法,本申请实施例还提供了一种发送流数据的方法,如下所示:
在上述第二队列中的数据量超过上述第二阈值的情况下,或者,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,上述方法还包括:
降低将数据块从上述第一队列加入上述第二队列的速率。
可选的,在上述第二队列中的数据量超过上述第二阈值的情况下,上述方法还包括:
暂停将上述第一队列中的至少一个数据块加入上述第二队列,直到上述第二队列中的数据量不超过上述第二阈值,继续执行上述将上述第一队列中的至少一个数据块加入上述第二队列。
可选的,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,上述方法还包括:
暂停将上述第一队列中的至少一个数据块加入上述第二队列,直到上述第二队列中的数据量的占用比不超过上述第三阈值,继续执行上述将上述第一队列中的至少一个数据块加入上述第二队列。
本申请实施例中,若将数据块从第一队列加入第二队列的速率可变,则在第二队列的数据量超过第二阈值的情况下,或者,在第二队列中的数据量的占用比超过第三阈值的情况下,数据发送设备便可以降低该速率,从而进一步减少加入第二队列的数据块。可以理解的是,本申请实施例中,可以设置一个阈值来衡量第二队列中的数据量,如第二阈值或第三阈值,也可以同时通过第二阈值和第三阈值来衡量第二队列中的数据量。举例来说,在同时通过第二阈值和第三阈值衡量第二队列中的数据量的情况下,只要第二队列中的数据量满足其中一个条件,该数据发送设备便可以降低将第一队列中的数据块加入第二队列的速率;而若第二队列中的数据量不超过第二阈值或该第二队列中的数据量的占用比不超过第三阈值,则该数据发送设备便可以以正常的速率将第一队列中的数据块加入第二队列。又或者,第二队列中的数据量不超过第二阈值,且该第二队列中的数据量的占用比不超过第三阈值,则该数据发送设备以正常的速率将第一队列中的数据块加入第二队列。
而若将数据块从第一队列加入第二队列的速率不可变,则在第二队列的数据量超过第二阈值的情况下,数据发送设备便可以停止将数据块从第一队列加入第二队列,直到第二队列中的数据量不超过该第二阈值,则该数据发送设备便可以继续将第一队列中的数据块加入第二队列。或者,在第二队列中的数据量的占用比超过第三阈值的情况下,该数据发送设备也可以停止将数据块从第一队列加入第二队列,直到第二队列中的数据量的占用比不超过第三阈值,则该数据发送设备可以继续将第一队列中的数据块加入第二队列。可以理解的是,本申请实施例中,可以仅仅设置一个阈值,如第二阈值或第三阈值,也可以同时设置第二阈值和第三阈值来衡量第二队列的数据量情况等,本申请实施例不作限定。具体可参考上述描述,这里不再赘述。
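上述"超过阈值即暂停、回落后继续"的调度逻辑可用如下示意性代码表示(以一次非阻塞调度为例,队列与阈值的表示方式均为假设):

```python
from collections import deque

def schedule_once(first_q: deque, second_q: deque, second_threshold: int) -> int:
    """在第二队列中的数据量不超过第二阈值的前提下,
    将第一队列中的数据块依次加入第二队列;
    一旦超过阈值即暂停并返回,待第二队列被 TCP 发送消耗后再次调度。
    返回本次加入第二队列的数据块个数。"""
    moved = 0
    while first_q and sum(len(b) for b in second_q) <= second_threshold:
        second_q.append(first_q.popleft())
        moved += 1
    return moved

first_q = deque(b"x" * 10 for _ in range(5))   # 5 个 10 字节的数据块
second_q = deque()
print(schedule_once(first_q, second_q, second_threshold=25))  # 3
```

若加入第二队列的速率可变,也可以在阈值判断处改为降低调度频率,而非完全暂停。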
实施本申请实施例,可以有效避免第二队列中的数据块过多而无法存入,导致应用卡顿;以及有效避免了第二队列中的数据块过多而增加数据块的传输时延的情况。
可选的,在网络质量不好的情况下,第一队列中的数据量可能也会很多,该情况下,基于上述所描述的数据处理方法,本申请实施例还提供了一种发送流数据的方法,如下所示:
在上述第一队列中的数据量超过第一阈值的情况下,丢弃上述第一队列内优先级较低的数据块。
具体地,第一阈值为动态的衡量第一队列的数据量阈值,该第一阈值可以为由视频比特率、上述TCP连接的速率、上述第一队列中的数据量以及第一调节参数确定的阈值,上述第一调节参数为上述TCP连接能够缓存的视频流的数据块的时长。其中,不同的流媒体应用可能会对应不同的第一调节参数,相应地,不同的流媒体应用也可能对应相同的第一调节参数,本申请实施例不作限定。
应理解,本申请并不限定上述的第一阈值和下述的第二阈值的计算方式,或者取值规则,本申请中记载的具体计算方式仅为举例说明。
举例来说,根据视频比特率VBR,TCP连接的速率link-rate,一个可调节的参数beta以及第一队列的数据量(user-space-bufsize)来综合确定当前第一队列的数据量阈值。可选的,本申请实施例中第一阈值可以通过min[max(VBR,link-rate)*beta,user-space-bufsize]来计算,该第一阈值反映了当前link-rate下最大能够缓冲beta时间长度的视频流的数据块,保证在当前TCP连接下不发生等待,同时第一阈值不能超过第一队列的总数据量。例如,VBR=4Mbps,beta=1s,link-rate=10Mbps,user-space-bufsize=10MB,那么第一阈值等于1.25MB。
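上述第一阈值min[max(VBR,link-rate)*beta,user-space-bufsize]的计算可示意如下(单位换算与函数名为本文示例的假设):

```python
def first_threshold_bytes(vbr_bps: float, link_rate_bps: float,
                          beta_s: float, user_space_bufsize_bytes: float) -> float:
    """第一阈值 = min(max(VBR, link-rate) × beta, user-space-bufsize),
    表示当前链路下最多缓冲 beta 时长的视频流数据块,
    且不超过第一队列(用户态缓存)的总数据量。"""
    return min(max(vbr_bps, link_rate_bps) * beta_s / 8,
               user_space_bufsize_bytes)

# VBR = 4 Mbps,link-rate = 10 Mbps,beta = 1 s,用户态缓存 10 MB
print(first_threshold_bytes(4e6, 10e6, 1, 10e6))  # 1250000.0,即 1.25MB
```

当link-rate随网络质量下降时,该阈值也随之动态下降,从而提前触发用户态的丢包。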
可选的,本申请实施例中,在数据发送设备将应用下发的数据块存入第一队列后,该数据发送设备可以依据第一队列中的数据量大小,来决定是否丢弃该第一队列内优先级较低的数据块;或者,该数据发送设备将应用下发的数据块存入第一队列后,该数据发送设备可以依据第一队列中的数据量大小以及第二队列中的数据量大小,来决定是否丢弃该第一队列内优先级较低的数据块。
实施本申请实施例,在网络质量不好或网络质量变化剧烈的情况下,第二队列中的数据量可能会非常多,以及第一队列中的数据量可能也会非常多,利用本申请实施例所提供的方法,通过将优先级较高的数据块发送出去,丢弃优先级较低的数据块,从而能够有效保证将最重要的数据块发送出去,最大限度地保障了通信质量,减少了数据块在内核态的发送队列中排队的时间。
基于图3所描述的发送流数据的方法,参见图4,图4是本申请实施例提供的一种数据优化架构的示意图,该数据优化架构可应用于数据发送设备,具体地,可应用于数据发送设备的用户态中,如图4所示,该数据优化架构400至少可包括:
联合调度模块401、用户态缓存模块402和内核态缓存控制模块403;
具体地,该联合调度模块401,可用于获取流媒体应用下发的数据块,然后该数据块经过该联合调度模块401,将该数据块存入第一队列,具体地,该联合调度模块401可以控制将该数据块存入用户态缓存模块402;以及,该联合调度模块401还可以在第二队列中的数据量超过第二阈值的情况下,或者,该第二队列中的数据量的占用比超过第三阈值的情况下,控制丢弃第一队列内优先级较低的数据块;或者,该联合调度模块401还可以在第一队列中的数据量超过第一阈值的情况下,控制丢弃第一队列中优先级较低的数据块;或者,该联合调度模块401还可以在第一队列中无法存入数据块的情况下,控制将数据块直接存入第二队列等。
用户态缓存模块402,也可以理解为基于优先级的用户态缓存的管理模块,数据块经由联合调度模块401进入该用户态缓存模块402,经由该用户态缓存模块402进行调度,然后按照优先级策略传输数据块。
内核态缓存控制模块403,用于动态调整第二队列的数据量阈值,具体地第二队列的数据量阈值可依据图3所描述的方法来设置。其中,若流媒体应用对实时性要求较高,则该第二队列中的数据量阈值如第二阈值可以设置较小;若流媒体应用对实时性要求较低,则该第二队列中的数据量阈值如第二阈值可以设置较大。具体地,该内核态缓存控制模块403,还可以控制将数据块从第一队列加入第二队列中。
本申请实施例中的数据优化架构400部署在用户态,独立于操作系统内核,位于流媒体应用完成数据块的采集之后、通过TCP连接发送数据块之前。具体地,流媒体应用通过调用套接字API创建套接字,并建立与数据接收设备的TCP连接后,数据发送设备便可以通过该TCP连接发送流数据的数据块。更具体地,在向数据接收设备发送流数据的数据块之前,该流数据的数据块可依次通过该数据优化架构400中的联合调度模块401以及用户态缓存模块402,再经由内核协议栈(如图2B中的协议栈实现)发送给TCP连接的数据接收设备。
该数据优化架构400通过获取流媒体应用下发的数据块,建立用户态的数据缓存,对数据进行分析和处理。在网络状态不好或变化频繁的场景下,降低数据块在内核态的发送队列中排队的时间,提升用户体验。本申请实施例使用用户态缓存与内核态缓存两级缓冲管理,允许在用户态缓存中对数据块进行缓冲,以及在用户态缓存中对数据块进行优化处理。可以理解的是,任何其他流媒体都可以替代视频流数据,以获得类似的效果。例如在远程桌面场景下,在发送视频流数据的同时,也会发送控制命令的流数据。这些控制命令的流数据可以在用户态进行缓冲,以便在网络恶化时,根据优先级对数据进行排序和丢弃。
基于图4所提供的数据优化架构400,本申请实施例提供了两种架构部署方式,如图5A和图5B所示。
第一种架构部署方式,参见图5A,通过库(动态库或静态库)和API的方式。流媒体应用通过调用本申请实施例提供的数据优化架构400的API,直接将数据块发送至数据优化架构400,由数据优化架构400对数据块进行处理后,发送到操作系统内核。但是该种情况下的部署方式需要应用程序替换原有的系统调用。也就是说,使用图5A所提供的部署方式时,该流媒体应用可以由数据优化架构400的开发商来开发。
因此,该情况下,图3所描述的发送流数据的方法中,将上述应用下发的数据块存入第一队列包括:
通过调用目标应用程序编程接口API将上述应用下发的数据块存入上述第一队列。
第二种架构部署方式,参见图5B,通过代理的方式。本申请实施例中的数据优化架构400作为一个单独的进程运行在用户态之中,通过配置应用程序的代理(如使用iptables将流媒体应用下发的数据块全部导入到指定的进程),将所有数据块截获到本申请实施例中的数据优化架构400,处理完成之后,再将数据块发送到操作系统内核。该情况下的部署方式不需要修改原有应用程序即可使用。也就是说,使用图5B所提供的部署方式时,该流媒体应用可以不由数据优化架构400的开发商来开发。
因此,该情况下,图3所描述的发送流数据的方法中,上述将上述应用下发的数据块存入第一队列包括:
通过上述应用的代理将上述应用下发的数据块存入上述第一队列,上述代理为运行在上述操作系统的用户态的一个进程。
为了更形象地描述本申请实施例所提供的发送流数据的方法,以下将结合具体的实施例来说明该发送流数据的方法。以下方法均应用于TCP连接的数据发送设备,该数据发送设备的操作系统中运行有应用。
参见图6,图6是本申请实施例提供的另一种发送流数据的方法的流程示意图,该发送流数据的方法基于图3进一步得到,如图6所示,该发送流数据的方法至少包括以下步骤:
601、数据发送设备将应用下发的数据块存入第一队列;
其中,该数据块为流数据,该第一队列为数据发送设备的操作系统的用户态中的队列,该第一队列用于放置待发送的流数据的数据块。
602、检测第二队列中的数据量是否超过第二阈值,若是,则执行603;否则,执行604;
603、丢弃第一队列内优先级较低的数据块;
604、将第一队列中的至少一个数据块加入第二队列;
可选的,在第二队列中的数据量超过第二阈值的情况下,该数据发送设备不仅可以丢弃第一队列内优先级较低的数据块,该数据发送设备还可以降低将数据块从第一队列加入第二队列的速率,或者,该数据发送设备还可以暂停将第一队列中的至少一个数据块加入第二队列,直到第二队列中的数据量不超过第二阈值,继续执行将第一队列中的至少一个数据块加入第二队列。具体地,如在数据发送设备丢弃第一队列内优先级较低的数据块,以及该数据发送设备还降低将数据块从第一队列加入第二队列的速率的情况下,本申请实施例中,该数据发送设备可以同时执行丢弃第一队列内优先级较低的数据块的步骤以及执行降低将数据块从第一队列加入第二队列的速率的步骤,或者,该数据发送设备也可以以先后顺序执行丢弃第一队列内优先级较低的数据块的步骤以及执行降低将数据块从第一队列加入第二队列的速率的步骤等,本申请实施例不作限定。
605、通过第二队列,向TCP连接的数据接收设备发送数据。
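图6所示的步骤601~605可用如下示意性代码串联(其中丢弃策略取第一队列中优先级最低的数据块,priorities映射与各数值均为假设):

```python
from collections import deque

def send_block(block: bytes, first_q: deque, second_q: deque,
               second_threshold: int, priorities: dict) -> None:
    """601 将数据块存入第一队列;602 检测第二队列中的数据量;
    603 超过第二阈值则丢弃第一队列内优先级较低的数据块;
    604 否则将第一队列中的数据块加入第二队列,
    605 由内核协议栈经 TCP 连接发送。"""
    first_q.append(block)                                    # 601
    if sum(len(b) for b in second_q) > second_threshold:     # 602
        victim = min(first_q, key=lambda b: priorities[b])   # 603
        first_q.remove(victim)
    else:
        second_q.append(first_q.popleft())                   # 604/605

prio = {b"I-frame": 2, b"B-frame": 0}
fq, sq = deque(), deque()
send_block(b"I-frame", fq, sq, second_threshold=100, priorities=prio)
send_block(b"B-frame", fq, sq, second_threshold=0, priorities=prio)
print(list(sq), list(fq))  # [b'I-frame'] [],低优先级的 B 帧被丢弃
```

实际实现中,第二队列即内核态发送缓存,604步对应一次套接字写入,而非进程内的队列操作。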
其中,在数据发送设备中的用户态空间很大的情况下,可以执行图6所描述的发送流数据的方法。可选的,在数据发送设备中的用户态空间不是很大的情况下,本申请实施例还提供了一种发送流数据的方法,如下所示:
参见图7,图7是本申请实施例提供的又一种发送流数据的方法的流程示意图,该发送流数据的方法基于图3进一步得到。如图7所示,该发送流数据的方法至少包括以下步骤:
701、数据发送设备获取应用下发的数据块;其中,数据块为流数据;
702、检测第一队列内的数据量是否超过第一阈值,若否,则执行703;若是,则执行704;
可选的,在数据发送设备检测第一队列内的数据量是否超过第一阈值之前,该数据发送设备还可以先获取该第一队列内的数据量,然后再检测该第一队列内的数据量是否超过了第一阈值。
本申请实施例中,数据发送设备可以实时地或以固定频率检测第一队列内的数据量,也可以在数据发送设备获得应用下发的数据块的情况下,来检测第一队列内的数据量等。其中,在数据发送设备获得应用下发的数据块的情况下,该数据发送设备检测第一队列内的数据量是否超过第一阈值,可以减少数据发送设备的功耗,避免因实时检测或以固定频率检测而增加功耗。
703、将数据块存入第一队列;
其中,第一队列为数据发送设备的操作系统的用户态中的队列,该第一队列用于放置待发送的流数据的数据块。
704、检测第二队列中的数据量是否超过第二阈值,若是,则执行705;否则,执行706;
可选的,在数据发送设备检测第二队列中的数据量是否超过第二阈值之前,该数据发送设备还可以先获取该第二队列中的数据量,然后再检测该第二队列中的数据量是否超过了第二阈值。
705、丢弃第一队列内优先级较低的数据块;
可以理解的是,在丢弃第一队列内优先级较低的数据块的过程中,该数据发送设备还可以降低将数据块从第一队列加入第二队列的速率,或者,该数据发送设备还可以暂停将第一队列中的至少一个数据块加入第二队列,直到第二队列中的数据量不超过第二阈值,继续执行将第一队列中的至少一个数据块加入第二队列。或者,该数据发送设备还可以暂停将第一队列中的至少一个数据块加入第二队列,直到该第二队列中的数据量不超过第二阈值,且该第一队列内的数据量不超过第一阈值,继续执行将第一队列中的至少一个数据块加入第二队列,具体地,如在数据发送设备丢弃第一队列内优先级较低的数据块,以及该数据发送设备还降低将数据块从第一队列加入第二队列的速率的情况下,本申请实施例中,该数据发送设备可以同时执行丢弃第一队列内优先级较低的数据块的步骤以及执行降低将数据块从第一队列加入第二队列的速率的步骤,或者,该数据发送设备也可以以先后顺序执行丢弃第一队列内优先级较低的数据块的步骤以及执行降低将数据块从第一队列加入第二队列的速率的步骤等,本申请实施例不作限定。
706、将第一队列中的至少一个数据块加入第二队列;
其中,该第二队列为数据发送设备的操作系统的内核态中,与TCP协议对应的发送缓存队列。
707、通过第二队列,向TCP连接的数据接收设备发送数据。
实施本申请实施例,在网络质量不好或网络质量变化剧烈的情况下,可以有效地利用宝贵的内核态发送缓存,避免数据块在内核态中的堆积。
可以理解的是,以上各个实施例中所描述的实现方式各有侧重,未详尽描述的实现方式,还可以参考其他实施例。
为了更形象地描述图3、图6以及图7所描述的发送流数据的方法,以下将结合具体的场景来说明。
以实时视频监控为例,移动视频采集设备通过TCP实时传递视频流数据给远端监控节点,具体可包括动态巡检,或者无人机的巡检等。该情况下,对于视频流数据的实时性要求较高,以便于远端监控节点能够及时响应。同时由于移动视频采集设备在不断地移动,且网络环境也可能会不断地发生变化,因此如何能够保证实时视频流数据的传输尤为重要。因此通过在该移动视频采集设备上应用本申请实施例所提供的数据处理方法,能够很好地保障实时视频流数据的传输,提高业务的整体体验。
如图8A所示,图8A是本申请实施例提供的实时视频采集设备处理流数据的过程示意图。可以理解的是,在图8A所示的示意图中,实时流媒体数据为本申请实施例中所描述的流数据,移动视频采集设备可为本申请实施例中所描述的数据发送设备,远端监控节点可为本申请实施例中所描述的接收设备。
其中,实时流媒体数据被实时视频采集设备所获取,以及该实时流媒体数据被存入用户态缓存中,该用户态缓存通过进行优先级管理,从而得到实时流媒体数据的优先级。如图8A所示,假设用户态缓存中所存储的流数据包括F1、F2、F3、F4、F5和F6,则该实时视频采集设备通过对该F1、F2、F3、F4、F5和F6进行优先级管理,可分别得到该F1、F2、F3、F4、F5和F6的优先级。
在本申请实施例中,关于第一阈值和第二阈值的设置如下所示:
第一阈值的设置,如视频比特率VBR=4Mbps,beta=2s,link-rate当前为10Mbps,user-space-bufsize等于20MB,则依据min[max(VBR,link-rate)*beta,user-space-bufsize]的计算方式,可得到第一阈值等于20Mb,即该第一阈值等于2.5MB。
第二阈值的设置,如视频比特率VBR=4Mbps,alpha=100ms,则VBR*alpha=4Mbps*0.1s=400Kbit=50KB。
如图8A所示,内核态缓存的存储空间的占用大小阈值可设置为50KB,假设网络质量变差,内核态缓存的50KB存储空间全部被占满,则该实时视频采集设备便可以按照优先级进行丢包操作,如图8A所示,可丢弃F2,并按照优先级顺序依次发送F1、F5、F4、F3和F6。
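以图8A中的F1~F6为例,按优先级丢包并排序发送的过程可示意如下(各帧的具体优先级数值为假设,仅用于复现"丢弃F2、按F1、F5、F4、F3、F6的顺序发送"的效果):

```python
def drop_and_order(frame_priority: dict, n_drop: int) -> list:
    """frame_priority: 帧名 -> 优先级(数值越大优先级越高)。
    丢弃优先级最低的 n_drop 帧,其余按优先级从高到低给出发送顺序。"""
    ordered = sorted(frame_priority, key=frame_priority.get, reverse=True)
    return ordered[:len(ordered) - n_drop]

# 假设优先级 F1 > F5 > F4 > F3 > F6 > F2,内核态缓存占满时丢弃 1 帧
prio = {"F1": 6, "F5": 5, "F4": 4, "F3": 3, "F6": 2, "F2": 1}
print(drop_and_order(prio, 1))  # ['F1', 'F5', 'F4', 'F3', 'F6'],F2 被丢弃
```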
又举例来说,若实时视频采集设备监控到当前链路速率link-rate下降,例如进入弱覆盖时,link-rate为2Mbps,这时该实时视频采集设备可以将第一阈值动态修改为对应的计算值1MB(min[max(VBR,link-rate)*beta,user-space-bufsize],其中VBR=4Mbps,beta=2s)。可以理解的是,该计算值仅为一种示例,不应将其理解为对本申请实施例具有限定意义。可以理解的是,链路速率可以理解为TCP连接的数据发送设备与数据接收设备之间的链路的速率。
具体地数据处理方法可如图8B所示,图8B中是以第一阈值为2.5MB,第二阈值为50KB为例来说明本申请实施例的数据处理方法。数据处理方法可如下所示:
801、实时视频采集设备采集实时流媒体数据的数据块;
802、根据内核态缓存的数据量情况和用户态缓存的数据量情况,判断是否执行丢包操作;若须丢包,则执行806;否则执行803;
其中,根据内核态缓存的存储空间的占用情况和用户态缓存的存储空间的占用情况判断是否执行丢包操作,具体可以为:若内核态缓存的存储空间的占用大小超过50KB,且用户态缓存的存储空间的占用大小超过2.5MB,则执行丢包操作;而若用户态缓存的存储空间的占用大小不超过2.5MB,则可以执行步骤803。
803、将实时流媒体数据的数据块存到用户态缓存;
804、获取内核态缓存的数据量情况,并判断该内核态缓存的数据量是否小于50KB,若是,则执行807;否则,执行805;
805、停止将用户态缓存中的实时流媒体数据的数据块加入内核态缓存中;
806、丢弃用户态缓存中优先级低的实时流媒体数据的数据块;
807、写入内核态缓存中,把实时流媒体数据的数据块交给内核协议栈处理。
本申请实施例中,通过设置多级缓冲机制,即将用户态缓存与内核态缓存相结合的实时流式传输的缓冲管理,可以有效减少数据块在内核态缓存中的排队时间;同时依据网络质量以及视频比特率来决定内核态缓存的占用大小以及用户态缓存的占用大小,可以充分根据网络质量来调整实时流式传输的速率,避免排队的时间过长,有效降低了实时流媒体数据传输的整体时延。
可以理解的是,图8B所示的发送流数据的方法仅仅为一种示例,因此未详尽描述的实现方式,还可以参考前述实施例,这里不再一一赘述。
如图8C所示,图8C是本申请实施例提供的远程控制设备处理流数据的过程示意图。其中,实时控制命令可以为本申请实施例中所描述的流数据,该远程控制设备可以为本申请实施例中所描述的数据发送设备。如图8C,以远程桌面场景为例,远程控制设备需要通过用户界面(User Interface,UI)远程操作桌面操作系统,在网络时延较大的情况下,鼠标的操作会出现时滞,这个时候,可以把最实时的控制命令发送给接收设备,而把前序排队的报文(优先级较低)丢弃,提高用户感受。具体可如下所示:
远程控制设备接收用户输入的三条控制指令,如图8C所示,该三条控制指令可为:
1.Move window<ABC>location<X1,Y1>;
2.Move window<ABC>location<X2,Y2>;
3.Move window<ABC>location<X3,Y3>;
其中,在网络排队时延高于一个阈值(如500ms)的情况下,该远程控制设备检测到内核态缓存的占用大小较高(大于VBR*100ms),该远程控制设备可将用户态缓存中优先级较低的控制指令丢弃,而把最紧迫的当前命令3存储到内核态缓存中,从而将该控制指令3优先发送给接收设备,提高用户的实时性感受。
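上述"丢弃前序排队命令、只保留并发送最新命令"的处理可示意如下(队列内容与时延阈值均为本场景的假设):

```python
from collections import deque

def flush_stale_commands(cmd_q: deque, queue_delay_ms: float,
                         delay_threshold_ms: float = 500) -> deque:
    """当排队时延超过阈值时,丢弃用户态缓存中前序(优先级较低)的控制命令,
    仅保留最新一条交给内核态缓存发送,以提高实时性。"""
    if queue_delay_ms > delay_threshold_ms and len(cmd_q) > 1:
        latest = cmd_q.pop()   # 最新、最紧迫的命令
        cmd_q.clear()
        cmd_q.append(latest)
    return cmd_q

q = deque([
    "Move window<ABC>location<X1,Y1>",
    "Move window<ABC>location<X2,Y2>",
    "Move window<ABC>location<X3,Y3>",
])
print(list(flush_stale_commands(q, queue_delay_ms=800)))
# ['Move window<ABC>location<X3,Y3>']
```

时延未超过阈值时队列保持不变,三条命令按原顺序依次发送。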
以上介绍了本申请实施例所提供的发送流数据的方法,以下将具体介绍本申请实施例所提供的数据发送设备。
参见图9A,图9A是本申请实施例提供的一种数据发送设备的结构示意图,该数据发送设备可用于执行前述各个实施例所描述的发送流数据的方法,该数据发送设备为应用于TCP协议的设备,且该数据发送设备的操作系统中运行有应用。如图9A所示,该数据发送设备至少包括:
存入单元901,用于将上述应用下发的数据块存入第一队列,上述数据块为流数据,上述第一队列为上述数据发送设备的操作系统的用户态中的队列,上述第一队列用于放置待发送的流数据的数据块;
加入单元902,用于在第二队列中的数据量满足预设条件的情况下,将上述第一队列中的至少一个数据块加入上述第二队列,上述第二队列为上述数据发送设备的操作系统的内核态中,TCP协议对应的发送缓存队列;
发送单元903,用于通过上述第二队列,向上述TCP连接的数据接收设备发送数据。
具体地,上述预设条件为:上述第二队列中的数据量不超过第二阈值,或者,上述第二队列中的数据量的占用比不超过第三阈值。
实施本申请实施例,通过将应用下发的数据块存入第一队列,然后在第二队列中的数据量满足预设条件的情况下,将第一队列中的至少一个数据块加入第二队列,可以有效减少第二队列中数据块的堆积,从而有效减少了流数据的传输时延,提高了使用TCP协议传输流数据的实时性,提高了流数据传输的效率。
可选的,如图9B所示,该数据发送设备还包括:
丢弃单元904,用于在上述第二队列中的数据量超过上述第二阈值的情况下,或者,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,丢弃上述第一队列内优先级较低的数据块。
可选的,如图9B所示,该数据发送设备还包括:
降低速率单元905,用于在上述第二队列中的数据量超过上述第二阈值的情况下,或者,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,降低将数据块从上述第一队列加入上述第二队列的速率。
可选的,如图9B所示,上述数据发送设备还包括:
暂停单元906,用于在上述第二队列中的数据量超过上述第二阈值的情况下,暂停将上述第一队列中的至少一个数据块加入上述第二队列;
上述加入单元902,还用于在上述第二队列中的数据量不超过上述第二阈值的情况下,继续执行上述将上述第一队列中的至少一个数据块加入第二队列。
可选的,暂停单元906,还用于在所述第二队列中的数据量的占用比超过所述第三阈值的情况下,暂停将所述第一队列中的至少一个数据块加入所述第二队列;
上述加入单元902,还用于在上述第二队列中的数据量的占用比不超过第三阈值的情况下,继续执行上述将上述第一队列中的至少一个数据块加入上述第二队列。
具体地,上述丢弃单元904,还用于在上述第一队列中的数据量超过第一阈值的情况下,丢弃上述第一队列内优先级较低的数据块。
具体地,上述应用下发的数据块为视频流的数据块,上述视频流的数据块包括双向预测编码B帧,帧间预测编码P帧以及帧内编码I帧;其中,
在上述视频流的数据块中,被其他帧引用的次数越多的帧的优先级越高;和/或,在上述视频流的数据块中包含连续多帧未被其他帧引用,则上述连续多帧中的第一帧的优先级最低。
其中,在上述B帧、上述P帧和上述I帧被其他帧引用的次数相同的情况下,或者,在上述B帧、上述P帧和上述I帧分别为一组连续多帧中的第一帧的情况下,
上述B帧的优先级低于上述P帧的优先级,上述P帧的优先级低于上述I帧的优先级。
具体地,上述应用为流媒体应用,上述应用下发的数据块为视频流的数据块;
上述第一阈值为由视频比特率、上述TCP连接的速率、上述第一队列中的数据量以及第一调节参数确定的阈值,上述第一调节参数为上述TCP连接能够缓存的视频流的数据块的时长;
上述第二阈值为由视频比特率以及第二调节参数确定的阈值,上述第二调节参数为延迟参数,该延迟参数用于表征上述流媒体应用可容忍的延迟程度。
具体地,上述存入单元901,具体用于通过调用目标应用程序编程接口API将上述应用下发的数据块存入上述第一队列。
具体地,上述存入单元901,具体用于通过上述应用的代理将上述应用下发的数据块存入上述第一队列,上述代理为运行在上述操作系统的用户态的一个进程。
可以理解的是,图9A和图9B所示的数据发送设备还用于执行第一实施例(图3)、第二实施例(图6)、第三实施例(图7)以及第四实施例(图8B)所描述的实现方式,各个单元的具体实现方式这里不再一一详述。
本申请实施例中,图4所提供的数据优化架构400中的联合调度模块401可具体用于控制存入单元901将应用下发的数据块存入第一队列,其中,第一队列相当于图4中的用户态缓存模块402,内核态缓存控制模块403可用于控制加入单元902将第一队列中的至少一个数据块加入第二队列。
请参见图10,图10是本申请实施例提供的又一种数据发送设备的结构示意图,该数据发送设备为应用于TCP协议的设备,该数据发送设备的操作系统中运行有应用,以及该数据发送设备的操作系统的用户态中的队列称为第一队列,该数据发送设备的操作系统的内核态中,与TCP协议对应的发送缓存队列称为第二队列。
如图10所示,该数据发送设备至少包括处理电路1001、存储介质1002和收发器1003,处理电路1001、存储介质1002和收发器1003通过总线1004相互连接。
存储介质1002包括但不限于是随机存储记忆体(random access memory,RAM)、只读存储器(read-only memory,ROM)、可擦除可编程只读存储器(erasable programmable read only memory,EPROM)、或便携式只读存储器(compact disc read-only memory,CD-ROM),该存储介质1002用于相关指令及数据。
收发器1003用于接收和发送数据。具体地,该收发器1003可以包括网卡,或天线等,本申请实施例中,处理电路1001通过该收发器1003,执行通过第二队列,向TCP连接的数据接收设备发送数据的步骤,该数据具体可为数据块,该数据块为流数据。
处理电路1001可以是一个或多个中央处理器(central processing unit,CPU),或者,该处理电路1001可以是一个或多个网络处理器(Network Processor,NP),或者,该处理电路1001还可以是一个或多个应用处理器(Application Processor,AP),或者,该处理电路1001还可以是CPU与NP的组合等,或者,该处理电路1001还可以是CPU与AP的组合等等,本申请实施例不作限定。可选的,该处理电路1001还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
该数据发送设备中的处理电路1001用于读取上述存储介质1002中存储的程序代码,执行以下操作:
将上述应用下发的数据块存入第一队列,上述数据块为流数据,上述第一队列为上述数据发送设备的操作系统的用户态中的队列,上述第一队列用于放置待发送的流数据的数据块;
在第二队列中的数据量满足预设条件的情况下,将上述第一队列中的至少一个数据块加入上述第二队列,上述第二队列为上述数据发送设备的操作系统的内核态中,TCP协议对应的发送缓存队列;
通过上述第二队列,向上述TCP连接的数据接收端发送数据。
其中,通过上述第二队列,向上述TCP连接的数据接收端发送数据,具体可以理解为利用收发器通过上述第二队列,向上述TCP连接的数据接收端发送数据。
具体地,上述预设条件为:上述第二队列中的数据量不超过第二阈值,或者,上述第二队列中的数据量的占用比不超过第三阈值。
可选的,该数据发送设备中的处理电路1001还用于读取上述存储介质1002中存储的程序代码,执行如下操作:
在上述第二队列中的数据量超过上述第二阈值的情况下,或者,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,丢弃上述第一队列内优先级较低的数据块。
可选的,在上述第二队列中的数据量超过上述第二阈值的情况下,或者,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,该数据发送设备中的处理电路1001还用于读取上述存储介质1002中存储的程序代码,执行如下操作:
降低将数据块从上述第一队列加入上述第二队列的速率。
可选的,在上述第二队列中的数据量超过上述第二阈值的情况下,该数据发送设备中的处理电路1001还用于读取上述存储介质1002中存储的程序代码,执行如下操作:
暂停将上述第一队列中的至少一个数据块加入上述第二队列,直到上述第二队列中的数据量不超过上述第二阈值,继续执行上述将上述第一队列中的至少一个数据块加入上述第二队列。
可选的,在上述第二队列中的数据量的占用比超过上述第三阈值的情况下,该数据发送设备中的处理电路1001还用于读取上述存储介质1002中存储的程序代码,执行如下操作:
暂停将上述第一队列中的至少一个数据块加入上述第二队列,直到上述第二队列中的数据量的占用比不超过上述第三阈值,继续执行上述将上述第一队列中的至少一个数据块加入上述第二队列。
可选的,该数据发送设备中的处理电路1001还用于读取上述存储介质1002中存储的程序代码,执行如下操作:
在上述第一队列中的数据量超过第一阈值的情况下,丢弃上述第一队列内优先级较低的数据块。
具体地,上述应用下发的数据块为视频流的数据块,上述视频流的数据块包括双向预测编码B帧,帧间预测编码P帧以及帧内编码I帧;其中,
在上述视频流的数据块中,被其他帧引用的次数越多的帧的优先级越高;和/或,在上述视频流的数据块中包括连续多帧未被其他帧引用,则上述连续多帧中的第一帧的优先级最低。
具体地,在上述B帧、上述P帧和上述I帧被其他帧引用的次数相同的情况下,或者,在上述B帧、上述P帧和上述I帧分别为一组连续多帧中的第一帧的情况下,
上述B帧的优先级低于上述P帧的优先级,上述P帧的优先级低于上述I帧的优先级。
具体地,上述应用为流媒体应用,上述应用下发的数据块为视频流的数据块;
上述第一阈值为由视频比特率、上述TCP连接的速率、上述第一队列中的数据量以及第一调节参数确定的阈值,上述第一调节参数为上述TCP连接能够缓存的视频流的数据块的时长;
上述第二阈值为由上述视频比特率以及第二调节参数确定的阈值,上述第二调节参数为延迟参数,上述延迟参数用于表征上述流媒体应用可容忍的延迟程度。
具体地,上述处理电路1001将上述应用下发的数据块存入第一队列包括:
通过调用目标应用程序编程接口API将上述应用下发的数据块存入上述第一队列。
具体地,上述处理电路1001上述将上述应用下发的数据块存入第一队列包括:
通过上述应用的代理将上述应用下发的数据块存入上述第一队列,上述代理为运行在上述操作系统的用户态的一个进程。
需要说明的是,各个操作的实现还可以对应参照前述实施例所示的方法实施例的相应描述。
在图10所描述的数据发送设备中,处理电路1001还可用于执行图9A和图9B所示的存入单元901和加入单元902所执行的操作,或者,该处理电路1001还可用于执行图4所示的联合调度模块401和内核态缓存控制模块403所执行的操作。
本申请实施例还提供一种计算机可读存储介质,上述计算机可读存储介质中存储有计算机程序,该计算机程序包括程序指令,该程序指令当被数据发送设备的处理电路执行时,使处理电路执行前述实施例所示的方法流程。
具体地,上述程序指令可被处理电路执行,实现:
将上述应用下发的数据块存入第一队列,上述数据块为流数据,上述第一队列为上述数据发送设备的操作系统的用户态中的队列,上述第一队列用于放置待发送的流数据的数据块;
在第二队列中的数据量满足预设条件的情况下,将上述第一队列中的至少一个数据块加入上述第二队列,上述第二队列为上述数据发送设备的操作系统的内核态中,TCP协议对应的发送缓存队列;
通过上述第二队列,向上述TCP连接的数据接收端发送数据。
上述计算机可读存储介质可以是数据发送设备的内部存储单元,例如硬盘或内存。或者上述计算机可读存储介质也可以是上述数据发送设备的外部存储设备,例如数据发送设备上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等等。
本申请实施例还提供一种计算机程序产品,当上述计算机程序产品在数据发送设备上运行时,使前述实施例所示的方法流程得以实现。
参见图11,图11是以数据发送设备为终端设备为例提供的一种实现方式的结构框图,如图11所示,终端设备110可包括:应用芯片110、存储器115(一个或多个计算机可读存储介质)、射频(RF)模块116、外围系统117。这些部件可在一个或多个通信总线114上通信。
外围系统117主要用于实现终端设备110和用户/外部环境之间的交互功能,主要包括输入输出装置。具体实现中,外围系统117可包括:触摸屏控制器118、摄像头控制器119、音频控制器120以及传感器管理模块121。其中,各个控制器可与各自对应的外围设备(如触摸屏123、摄像头124、音频电路125以及传感器126)耦合。在一些实施例中,触摸屏123可以是配置有自电容式触控面板的触摸屏,也可以是配置有红外线式触控面板的触摸屏。在一些实施例中,摄像头124可以是3D摄像头。需要说明的是,外围系统117还可以包括其他I/O外设。本申请实施例中,该终端设备可以通过摄像头124获得视频流数据,或者,通过音频电路125获得待发送的音频流数据等等。
应用芯片110可集成包括:一个或多个处理器111、时钟模块112以及电源管理模块113。集成于应用芯片110中的时钟模块112主要用于为处理器111产生数据传输和时序控制所需要的时钟。集成于应用芯片110中的电源管理模块113主要用于为处理器111、射频模块116以及外围系统提供稳定的、高精确度的电压。可以理解的是,该终端设备除了应用芯片之外,还可以包括其他芯片,如基带芯片等。
射频(RF)模块116用于接收和发送射频信号,主要集成了接收器和发射器。射频(RF)模块116通过射频信号与通信网络和其他通信设备通信。具体实现中,射频(RF)模块116可包括但不限于:天线系统、RF收发器、一个或多个放大器、调谐器、一个或多个振荡器、数字信号处理器、CODEC芯片、SIM卡和存储介质等。在一些实施例中,可在单独的芯片上实现射频(RF)模块116。
存储器115与处理器111耦合,用于存储各种软件程序和/或多组指令。具体实现中,存储器115可包括高速随机存取的存储器,并且也可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。存储器115可以存储操作系统(下述简称系统),例如ANDROID,IOS,WINDOWS,或者LINUX等嵌入式操作系统。存储器115还可以存储网络通信程序,该网络通信程序可用于与一个或多个附加设备,一个或多个终端设备,一个或多个网络设备进行通信。存储器115还可以存储用户接口程序,该用户接口程序可以通过图形化的操作界面将应用程序的内容形象逼真的显示出来,并通过菜单、对话框以及按键等输入控件接收用户对应用程序的控制操作。本申请实施例中,存储器115中还可以存储视频流数据或音频流数据或控制命令等等。
存储器115还可以存储一个或多个应用程序。如图11所示,这些应用程序可包括:社交应用程序(例如Facebook),图像管理应用程序(例如相册),地图类应用程序(例如谷歌地图),浏览器(例如Google Chrome)等等。
应当理解,终端设备110仅为本申请实施例提供的一个例子,并且,终端设备110可具有比示出的部件更多或更少的部件,可以组合两个或更多个部件,或者可具有部件的不同配置实现。
在具体实现中,图11所示的终端设备还可以用于执行本申请实施例所提供的发送流数据的方法,如该终端设备可以用于执行如图3所示的方法,以及其他实施例的实现方式,这里不再一一详述。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。

Claims (24)

  1. 一种发送流数据的方法,其特征在于,所述方法应用于传输控制协议TCP连接的数据发送端,所述数据发送端的操作系统中运行有应用,所述方法包括:
    将所述应用下发的数据块存入第一队列,所述数据块为流数据,所述第一队列为所述数据发送端的操作系统的用户态中的队列,所述第一队列用于放置待发送的流数据的数据块;
    在第二队列中的数据量满足预设条件的情况下,将所述第一队列中的至少一个数据块加入所述第二队列,所述第二队列为所述数据发送端的操作系统的内核态中,TCP协议对应的发送缓存队列;
    所述数据发送端通过所述第二队列,向所述TCP连接的数据接收端发送数据。
  2. 根据权利要求1所述的方法,其特征在于,所述预设条件为:所述第二队列中的数据量不超过第二阈值,或者,所述第二队列中的数据量的占用比不超过第三阈值。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    在所述第二队列中的数据量超过所述第二阈值的情况下,或者,在所述第二队列中的数据量的占用比超过所述第三阈值的情况下,丢弃所述第一队列内优先级较低的数据块。
  4. 根据权利要求2或3所述的方法,其特征在于,在所述第二队列中的数据量超过所述第二阈值的情况下,或者,在所述第二队列中的数据量的占用比超过所述第三阈值的情况下,所述方法还包括:
    降低将数据块从所述第一队列加入所述第二队列的速率。
  5. 根据权利要求2或3所述的方法,其特征在于,在所述第二队列中的数据量超过所述第二阈值的情况下,所述方法还包括:
    暂停将所述第一队列中的至少一个数据块加入所述第二队列,直到所述第二队列中的数据量不超过所述第二阈值,继续执行所述将所述第一队列中的至少一个数据块加入所述第二队列。
  6. 根据权利要求2或3所述的方法,其特征在于,在所述第二队列中的数据量的占用比超过所述第三阈值的情况下,所述方法还包括:
    暂停将所述第一队列中的至少一个数据块加入所述第二队列,直到所述第二队列中的数据量的占用比不超过所述第三阈值,继续执行所述将所述第一队列中的至少一个数据块加入所述第二队列。
  7. 根据权利要求1至3任意一项所述的方法,其特征在于,所述方法还包括:
    在所述第一队列中的数据量超过第一阈值的情况下,丢弃所述第一队列内优先级较低的数据块。
  8. 根据权利要求1至7任意一项所述的方法,其特征在于,所述应用下发的数据块为视频流的数据块,所述视频流的数据块包括双向预测编码B帧,帧间预测编码P帧以及帧内编码I帧;其中,
    在所述视频流的数据块中,被其他帧引用的次数越多的帧的优先级越高;和/或,在所述视频流的数据块中包括连续多帧未被其他帧引用,则所述连续多帧中的第一帧的优先级最低。
  9. 根据权利要求8所述的方法,其特征在于,在所述B帧、所述P帧和所述I帧被其他帧引用的次数相同的情况下,或者,在所述B帧、所述P帧和所述I帧分别为一组连续多帧中的第一帧的情况下,
    所述B帧的优先级低于所述P帧的优先级,所述P帧的优先级低于所述I帧的优先级。
  10. 根据权利要求1至9任意一项所述的方法,其特征在于,所述将所述应用下发的数据块存入第一队列包括:
    通过调用目标应用程序编程接口API将所述应用下发的数据块存入所述第一队列。
  11. 根据权利要求1至9任意一项所述的方法,其特征在于,所述将所述应用下发的数据块存入第一队列包括:
    通过所述应用的代理将所述应用下发的数据块存入所述第一队列,所述代理为运行在所述操作系统的用户态的一个进程。
  12. 一种数据发送设备,其特征在于,所述数据发送设备为应用传输控制协议TCP的设备,所述数据发送设备的操作系统中运行有应用,所述数据发送设备包括:
    存入单元,用于将所述应用下发的数据块存入第一队列,所述数据块为流数据,所述第一队列为所述数据发送设备的操作系统的用户态中的队列,所述第一队列用于放置待发送的流数据的数据块;
    加入单元,用于在第二队列中的数据量满足预设条件的情况下,将所述第一队列中的至少一个数据块加入所述第二队列,所述第二队列为所述数据发送设备的操作系统的内核态中,TCP协议对应的发送缓存队列;
    发送单元,用于通过所述第二队列,向所述TCP连接的数据接收设备发送数据。
  13. 根据权利要求12所述的数据发送设备,其特征在于,所述预设条件为:所述第二队列中的数据量不超过第二阈值,或者,所述第二队列中的数据量的占用比不超过第三阈值。
  14. 根据权利要求13所述的数据发送设备,其特征在于,所述数据发送设备还包括:
    丢弃单元,用于在所述第二队列中的数据量超过所述第二阈值的情况下,或者,在所述第二队列中的数据量的占用比超过所述第三阈值的情况下,丢弃所述第一队列内优先级较低的数据块。
  15. 根据权利要求13或14所述的数据发送设备,其特征在于,所述数据发送设备还包括:
    降低速率单元,用于在所述第二队列中的数据量超过所述第二阈值的情况下,或者,在所述第二队列中的数据量的占用比超过所述第三阈值的情况下,降低将数据块从所述第一队列加入所述第二队列的速率。
  16. 根据权利要求13或14所述的数据发送设备,其特征在于,所述数据发送设备还包括:
    暂停单元,用于在所述第二队列中的数据量超过所述第二阈值的情况下,暂停将所述第一队列中的至少一个数据块加入所述第二队列;
    所述加入单元,还用于在所述第二队列中的数据量不超过所述第二阈值的情况下,继续执行所述将所述第一队列中的至少一个数据块加入所述第二队列。
  17. 根据权利要求13或14所述的数据发送设备,其特征在于,所述数据发送设备还包括:
    暂停单元,用于在所述第二队列中的数据量的占用比超过所述第三阈值的情况下,暂停将所述第一队列中的至少一个数据块加入所述第二队列;
    所述加入单元,还用于在所述第二队列中的数据量的占用比不超过所述第三阈值的情况下,继续执行所述将所述第一队列中的至少一个数据块加入所述第二队列。
  18. 根据权利要求14所述的数据发送设备,其特征在于,
    所述丢弃单元,还用于在所述第一队列中的数据量超过第一阈值的情况下,丢弃所述第一队列内优先级较低的数据块。
  19. 根据权利要求12至18任意一项所述的数据发送设备,其特征在于,所述应用下发的数据块为视频流的数据块,所述视频流的数据块包括双向预测编码B帧,帧间预测编码P帧以及帧内编码I帧;其中,
    在所述视频流的数据块中,被其他帧引用的次数越多的帧的优先级越高;和/或,在所述视频流的数据块中包含连续多帧未被其他帧引用,则所述连续多帧中的第一帧的优先级最低。
  20. 根据权利要求19所述的数据发送设备,其特征在于,在所述B帧、所述P帧和所述I帧被其他帧引用的次数相同的情况下,或者,在所述B帧、所述P帧和所述I帧分别为一组连续多帧中的第一帧的情况下,
    所述B帧的优先级低于所述P帧的优先级,所述P帧的优先级低于所述I帧的优先级。
  21. 根据权利要求12至20任意一项所述的数据发送设备,其特征在于,
    所述存入单元,具体用于通过调用目标应用程序编程接口API将所述应用下发的数据块存入所述第一队列。
  22. 根据权利要求12至20任意一项所述的数据发送设备,其特征在于,
    所述存入单元,具体用于通过所述应用的代理将所述应用下发的数据块存入所述第一队列,所述代理为运行在所述操作系统的用户态的一个进程。
  23. 一种数据发送设备,其特征在于,所述数据发送设备为应用传输控制协议TCP的设备,所述数据发送设备的操作系统中运行有应用,所述数据发送设备包括:处理电路、存储介质和收发器;其中,所述处理电路、所述存储介质和所述收发器通过线路互联,所述存储介质中存储有程序指令;所述程序指令被所述处理电路执行时,使所述处理电路执行如权利要求1到11任意一项所述的方法。
  24. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被数据发送设备的处理电路执行时,使所述处理电路执行权利要求1至11任意一项所述的方法。
PCT/CN2019/073922 2018-02-07 2019-01-30 发送流数据的方法及数据发送设备 WO2019154221A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810132429.4 2018-02-07
CN201810132429.4A CN110121114B (zh) 2018-02-07 2018-02-07 发送流数据的方法及数据发送设备

Publications (1)

Publication Number Publication Date
WO2019154221A1 true WO2019154221A1 (zh) 2019-08-15

Family

ID=67519674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073922 WO2019154221A1 (zh) 2018-02-07 2019-01-30 发送流数据的方法及数据发送设备

Country Status (2)

Country Link
CN (1) CN110121114B (zh)
WO (1) WO2019154221A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813579A (zh) * 2020-07-17 2020-10-23 济南浪潮数据技术有限公司 一种通信方法、装置及可读存储介质,一种文件系统
CN113905196A (zh) * 2021-08-30 2022-01-07 浙江大华技术股份有限公司 视频帧管理方法、视频录像机和计算机可读存储介质
CN114371810A (zh) * 2020-10-15 2022-04-19 中国移动通信集团设计院有限公司 Hdfs的数据存储方法及装置
CN115334156A (zh) * 2021-04-26 2022-11-11 深信服科技股份有限公司 报文的处理方法、装置、设备、存储介质
CN117098191A (zh) * 2023-07-06 2023-11-21 佰路威科技(上海)有限公司 数据流调度控制方法及相关设备
CN118337715A (zh) * 2024-06-12 2024-07-12 江西科晨洪兴信息技术有限公司 一种物联网数据发送方法及系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260760B (zh) * 2020-01-10 2023-06-20 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及存储介质
CN111245736B (zh) * 2020-04-28 2020-08-07 上海飞旗网络技术股份有限公司 一种保持应用稳定支付的数据包速率控制方法
CN111935497B (zh) * 2020-09-18 2021-01-12 武汉中科通达高新技术股份有限公司 一种用于交警系统的视频流管理方法和数据服务器
CN112860321A (zh) * 2021-01-29 2021-05-28 上海阵量智能科技有限公司 命令下发方法、处理设备及存储介质
CN112988413A (zh) * 2021-02-07 2021-06-18 杭州复杂美科技有限公司 交易批量广播动态调节方法、计算机设备和存储介质
CN114422822B (zh) * 2021-12-27 2023-06-06 北京长焜科技有限公司 一种支持自适应hdmi编码的无人机数图传输控制方法
CN114500403A (zh) * 2022-01-24 2022-05-13 中国联合网络通信集团有限公司 一种数据处理方法、装置及计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101102281A (zh) * 2007-08-16 2008-01-09 中兴通讯股份有限公司 移动通信系统中大量数据上报时的数据处理方法
CN101699795A (zh) * 2009-10-29 2010-04-28 中兴通讯股份有限公司 一种报文拥塞处理方法及系统
CN101770412A (zh) * 2010-01-22 2010-07-07 华中科技大学 一种连续数据缓存系统及其数据缓存方法
CN102819497A (zh) * 2012-05-31 2012-12-12 华为技术有限公司 一种内存分配方法、装置及系统
CN104317530A (zh) * 2014-10-21 2015-01-28 浪潮电子信息产业股份有限公司 远程容灾技术中一种数据捕获方法的设计
CN104811391A (zh) * 2014-01-24 2015-07-29 中兴通讯股份有限公司 数据包的处理方法、装置及服务器

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1996271B (zh) * 2006-12-30 2013-08-28 华为技术有限公司 一种数据传输的方法及系统
CN101616194B (zh) * 2009-07-23 2012-07-11 中国科学技术大学 主机网络性能优化系统及方法
CN102375789B (zh) * 2010-08-09 2014-05-28 中标软件有限公司 一种通用网卡非缓存的零拷贝方法及零拷贝系统
CN102355462B (zh) * 2011-10-09 2015-05-20 大唐移动通信设备有限公司 一种实现tcp传输的方法及装置
CN103544324B (zh) * 2013-11-11 2017-09-08 北京搜狐新媒体信息技术有限公司 一种内核态的数据访问方法、装置及系统
CN103905420B (zh) * 2013-12-06 2017-10-10 北京太一星晨信息技术有限公司 一种协议栈和应用程序间传输数据的方法及装置
US10042682B2 (en) * 2014-01-30 2018-08-07 Hewlett Packard Enterprise Development Lp Copy message from application buffer to send buffer within kernel
CN105512286B (zh) * 2015-11-27 2019-09-24 浪潮(北京)电子信息产业有限公司 一种读写数据免拷贝系统与方法

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813579A (zh) * 2020-07-17 2020-10-23 济南浪潮数据技术有限公司 一种通信方法、装置及可读存储介质,一种文件系统
CN114371810A (zh) * 2020-10-15 2022-04-19 中国移动通信集团设计院有限公司 Hdfs的数据存储方法及装置
CN114371810B (zh) * 2020-10-15 2023-10-27 中国移动通信集团设计院有限公司 Hdfs的数据存储方法及装置
CN115334156A (zh) * 2021-04-26 2022-11-11 深信服科技股份有限公司 报文的处理方法、装置、设备、存储介质
CN113905196A (zh) * 2021-08-30 2022-01-07 浙江大华技术股份有限公司 视频帧管理方法、视频录像机和计算机可读存储介质
CN113905196B (zh) * 2021-08-30 2024-05-07 浙江大华技术股份有限公司 视频帧管理方法、视频录像机和计算机可读存储介质
CN117098191A (zh) * 2023-07-06 2023-11-21 佰路威科技(上海)有限公司 数据流调度控制方法及相关设备
CN118337715A (zh) * 2024-06-12 2024-07-12 江西科晨洪兴信息技术有限公司 一种物联网数据发送方法及系统

Also Published As

Publication number Publication date
CN110121114B (zh) 2021-08-27
CN110121114A (zh) 2019-08-13

Similar Documents

Publication Publication Date Title
WO2019154221A1 (zh) 发送流数据的方法及数据发送设备
CN111628847B (zh) 数据传输方法及装置
US9445150B2 (en) Asynchronously streaming video of a live event from a handheld device
CN109600610B (zh) 一种数据编码方法、终端及计算机可读存储介质
US9585062B2 (en) System and method for implementation of dynamic encoding rates for mobile devices
US10045089B2 (en) Selection of encoder and decoder for a video communications session
US20120039391A1 (en) System and method for transmission of data signals over a wireless network
US20110122869A1 (en) Method of Transmitting Data in a Communication System
CN113992967B (zh) 一种投屏数据传输方法、装置、电子设备及存储介质
CN111225209B (zh) 视频数据推流方法、装置、终端及存储介质
JP2015536594A (ja) 積極的なビデオフレームドロップ
JP7496022B2 (ja) クライアント、サーバ、受信方法及び送信方法
CN113068001B (zh) 基于级联摄像机的数据处理方法、装置、设备和介质
WO2020199929A1 (zh) 分发数据的方法和网络设备
US20070127437A1 (en) Medium signal transmission method, reception method, transmission/reception method, and device
CN115834556B (zh) 数据传输方法、系统、设备、存储介质及程序产品
CN113905257A (zh) 视频码率切换方法、装置、电子设备及存储介质
CN113316263A (zh) 数据传输方法、装置、设备和存储介质
US11134114B2 (en) User input based adaptive streaming
CN116436865A (zh) 多路径传输的重注入控制方法、电子设备及存储介质
CN113242446B (zh) 视频帧的缓存方法、转发方法、通信服务器及程序产品
CN106210867A (zh) 一种数据分享的方法和装置
Arun et al. Innovative solution for a telemedicine application
KR102419087B1 (ko) 미디어 스트리밍 제어 장치 및 방법
US20240298051A1 (en) Data relay apparatus, distribution system, data relay method, and computer-readable medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19750730

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19750730

Country of ref document: EP

Kind code of ref document: A1