WO2021238764A1 - Differentiated transmission method based on in-network caching - Google Patents

Differentiated transmission method based on in-network caching

Info

Publication number
WO2021238764A1
WO2021238764A1 (PCT/CN2021/094892)
Authority
WO
WIPO (PCT)
Prior art keywords
data packet
cache
data
buffer
priority
Prior art date
Application number
PCT/CN2021/094892
Other languages
English (en)
French (fr)
Inventor
李清
沈耿彪
江勇
吴宇
Original Assignee
南方科技大学
鹏城实验室
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南方科技大学, 鹏城实验室
Publication of WO2021238764A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/30: Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes

Definitions

  • The invention relates to the technical field of data transmission, and in particular to a differentiated transmission method based on in-network caching.
  • At present, network transmission performance is generally improved by improving the buffer, for example by modifying the management mechanism for data packets in the buffer or by increasing the buffer capacity.
  • However, after the buffer capacity is increased, the queuing time of data packets in the network also increases; long waits easily cause the protocol stack to time out, which degrades network transmission performance.
  • The technical problem to be solved by the present invention is to provide a differentiated transmission method based on in-network caching, in view of the shortcomings of the prior art.
  • A first aspect of the embodiments of the present invention provides a differentiated transmission method based on in-network caching, the method comprising: when a forwarding device receives a data packet, determining a cache table according to the forwarding port corresponding to the data packet; if the data packet matches the cache table, forwarding the data packet to a caching mechanism based on the forwarding port, wherein the caching mechanism is configured in the forwarding device; and if the data packet does not match the cache table, importing the data packet into a buffer so as to forward the data packet.
  • In one embodiment, determining the cache table according to the forwarding port corresponding to the data packet specifically includes: when the forwarding device receives the data packet, obtaining the priority corresponding to the data packet; if the priority meets a preset condition, determining the cache table according to the forwarding port corresponding to the data packet; and if the priority does not meet the preset condition, importing the data packet into the buffer to forward the data packet.
  • In one embodiment, importing the data packet into the buffer to forward the data packet specifically includes: if the data packet does not match the cache table, obtaining the cache identifier of the cache table; when the cache identifier is a first cache identifier, forwarding a copy of the data packet to the caching mechanism and importing the data packet into the buffer to forward it; and when the cache identifier is a second cache identifier, importing the data packet into the buffer to forward it.
  • In one embodiment, the cache identifier of the cache table is determined according to the buffer utilization, wherein the first cache identifier indicates that the buffer utilization is greater than or equal to a cache factor, and the second cache identifier indicates that the buffer utilization is less than the cache factor.
  • In one embodiment, forwarding the data packet to the caching mechanism based on the forwarding port specifically includes: obtaining the packet type corresponding to the data packet; and, if the packet type is a replicated data packet, generating a cache matching item according to the data packet and storing the cache matching item in the cache table corresponding to the forwarding port.
  • In one embodiment, forwarding the data packet to the caching mechanism based on the forwarding port further includes: if the packet type is not a replicated data packet, determining the data information corresponding to the data packet; if the caching mechanism includes the data block corresponding to the data information, associating the data packet with that data block; and if the caching mechanism does not include the data block corresponding to the data information, creating the data block corresponding to the data information and associating the data packet with it.
  • In one embodiment, the method further includes: the caching mechanism periodically detecting the buffer utilization; and, when the buffer utilization is less than the cache factor, the caching mechanism selecting several data packets according to the expected time slices of the data packets of each data block it stores and injecting the selected data packets into the buffer.
  • In one embodiment, the caching mechanism maintains its stored data packets based on time slices, and the method includes: at every time-slice interval, the caching mechanism detecting, among all the data packets it stores, all timeout data packets whose expected time has expired; and injecting all selected timeout data packets into the buffer.
  • In one embodiment, injecting all selected timeout data packets into the buffer specifically includes: the caching mechanism detecting whether an operation of injecting data packets into the buffer is currently being performed; if no such operation is being performed, injecting all selected timeout data packets into the buffer; and if such an operation is being performed, stopping the ongoing operation and injecting all selected timeout data packets into the buffer.
  • A second aspect of the embodiments of the present invention provides a transfer device, wherein the transfer device is configured with a caching mechanism, and the transfer device is used to execute the steps in any of the above differentiated transmission methods based on in-network caching.
  • Beneficial effects: compared with the prior art, the present invention provides a differentiated transmission method based on in-network caching.
  • The method includes: when the forwarding device receives a data packet, determining the cache table according to the forwarding port corresponding to the data packet; if the data packet matches the cache table, forwarding the data packet to a caching mechanism based on the forwarding port, wherein the caching mechanism is configured in the forwarding device; and if the data packet does not match the cache table, importing the data packet into the buffer to forward the data packet.
  • By configuring a caching mechanism in the forwarding device, the invention realizes in-network caching of transmitted data and coordinates the buffer and the cache through the cache table, thereby achieving differentiated transmission and overcoming the problem of upstream/downstream transmission mismatch.
  • FIG. 1 is a flowchart of the differentiated transmission method based on in-network caching provided by the present invention.
  • FIG. 2 is a schematic diagram of an example flow of the differentiated transmission method based on in-network caching provided by the present invention.
  • FIG. 3 is a schematic diagram of the in-network cache architecture in the forwarding device provided by the present invention.
  • FIG. 4 is a diagram of the data packet storage relationships of the caching mechanism in the forwarding device provided by the present invention.
  • FIG. 5 is a flowchart of the process by which the caching mechanism in the forwarding device processes cached data.
  • The present invention provides a differentiated transmission method based on in-network caching. To make the objectives, technical solutions, and effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
  • This embodiment provides a differentiated transmission method based on in-network caching. As shown in FIG. 1 and FIG. 2, the method includes:
  • S10: when the forwarding device receives a data packet, determining a cache table according to the forwarding port corresponding to the data packet.
  • Specifically, the forwarding device is a terminal device used for data transmission. In the forwarding device, an in-network cache architecture based on differentiated transmission requirements is constructed: a caching mechanism is configured in the forwarding device, and the cache function is implemented by that caching mechanism.
  • At the same time, a cache table is set up in the forwarding device, and data caching is managed through the cache table, which determines whether a data packet is cached in the caching mechanism or sent to the buffer.
  • The forwarding port is a port configured on the forwarding device through which data packets can be sent to the buffer. It can be understood that the forwarding device is configured with several forwarding ports, each of which is used to forward data packets; each forwarding port corresponds to one cache table, and the cache tables of different forwarding ports are different, i.e., forwarding ports and cache tables are in one-to-one correspondence. On this basis, when a data packet is received, the cache table corresponding to the packet can be determined according to its forwarding port.
  • The cache table includes several cache entries, each of which includes header information and a forwarding port. The header information is used to match data packets, and the forwarding port is used to determine the forwarding port corresponding to a data packet that matches the header information. It can be understood that after a data packet is received, its header information is matched against the header information of each cache entry in the cache table to determine whether the packet matches the cache table (a minimal sketch follows below).
  • When a cache entry whose header information matches the packet's header information exists in the cache table, the data packet matches the cache table; when no such entry exists, the data packet does not match the cache table. In addition, when the data packet matches the cache table, the forwarding port carried by the matched cache entry is used as the forwarding port corresponding to the data packet.
  • Further, in one implementation of this embodiment, after a data packet is received, it is necessary to determine whether the packet is one that can be forwarded by the forwarding device. For this purpose, the forwarding device stores a forwarding table, and the forwarding table records the data packets that can be forwarded by the device. When the forwarding device receives a data packet, it parses the packet to obtain its header data and matches the header data against the forwarding table. If the match succeeds, the cache table is determined according to the forwarding port corresponding to the data packet; if the match fails, the data packet is discarded.
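  • The following is a minimal sketch (not the patent's implementation) of a per-port cache table that matches packets by their header information (the 5-tuple), as described above; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

FiveTuple = Tuple[str, str, int, int, int]  # src IP, dst IP, src port, dst port, protocol

@dataclass
class CacheEntry:
    header: FiveTuple      # header information used for matching
    forwarding_port: int   # forwarding port carried by the entry

class CacheTable:
    """One cache table per forwarding port; cache_flag is the cache identifier (0 or 1)."""
    def __init__(self) -> None:
        self.entries: Dict[FiveTuple, CacheEntry] = {}
        self.cache_flag = 0

    def insert(self, header: FiveTuple, port: int) -> None:
        self.entries[header] = CacheEntry(header, port)

    def match(self, header: FiveTuple) -> Optional[CacheEntry]:
        # A packet matches the cache table iff some entry's header information
        # equals the packet's header information.
        return self.entries.get(header)
```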
  • Further, in one implementation of this embodiment, the data packet carries a priority. The priority reflects how preferentially the data packet is forwarded: the higher the priority of a data packet, the sooner it is forwarded. For example, if the priority of data packet A is a first priority, the priority of data packet B is a second priority, and the first priority is higher than the second priority, then data packet A is forwarded with higher preference than data packet B.
  • The priority may be configured for the data packet by the end-side server. The priorities are divided according to the delay tolerance of the data packet, and each priority corresponds to a delay interval. Based on a packet's delay tolerance, the delay interval it falls into can therefore be determined, and the priority corresponding to that interval is used as the packet's priority. After the priority of the data packet is obtained, a priority label corresponding to that priority is configured on the data packet, so that after obtaining the packet the forwarding device can determine the corresponding priority from the priority label carried by the packet, and can then determine the forwarding port corresponding to the packet based on the priority.
  • Since the priority is determined based on the delay tolerance of the data packet, the delay tolerance corresponding to the packet can also be configured in the packet when the priority is configured. The forwarding device can then obtain the delay tolerance corresponding to the packet and forward the packet within that delay, avoiding the timeout retransmission problem caused by packet expiry.
  • It can be understood that a data packet received by the forwarding device carries both a priority and a delay tolerance. The forwarding device can determine the forwarding port for the packet based on the priority and, based on the delay requirement, the latest time by which the packet needs to be forwarded. The forwarding device can therefore schedule data packets based on priority and delay requirements, which improves its transmission capacity.
  • For example, the data packets of each priority are configured with different TCP protocol stack RTO values; the delay of a data packet is represented by its RTO value, and the RTO values of the different priorities differ, for example 100 ms, 200 ms, 1 s, and 2 s.
  • In one specific implementation, the end-side server divides packets into four priorities according to their delay needs, recorded as a first priority, a second priority, a third priority, and a fourth priority, where the first priority is the highest, the second priority is higher than the third priority, and the third priority is higher than the fourth priority. The delay tolerance corresponding to the priorities increases in order from high to low; for example, the delay tolerance of the first priority is smaller than that of the second priority. A hypothetical mapping from delay requirement to priority is sketched below.
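  • A hypothetical mapping from a packet's delay requirement to one of the four priorities, following the example RTO values above (100 ms, 200 ms, 1 s, 2 s); the thresholds are assumptions, since the description only states that delay tolerance grows as the priority decreases.

```python
RTO_BY_PRIORITY = {1: 0.1, 2: 0.2, 3: 1.0, 4: 2.0}  # seconds, per the example above

def priority_for_delay(delay_tolerance_s: float) -> int:
    """Return the highest (numerically smallest) priority whose RTO covers the delay."""
    for prio in sorted(RTO_BY_PRIORITY):
        if delay_tolerance_s <= RTO_BY_PRIORITY[prio]:
            return prio
    return 4  # anything slower than 2 s falls into the lowest priority

assert priority_for_delay(0.05) == 1
assert priority_for_delay(1.5) == 4
```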
  • In addition, so that the forwarding device can read the priority of a data packet, the priority may be marked in the DSCP field of the packet (for example, 4 bits of the DSCP field are selected as the priority storage bits), so that the DSCP field carries the priority. For example, the first priority is marked as 0x01, the second priority as 0x02, the third priority as 0x03, and the fourth priority as 0x04.
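  • An illustrative sketch of the DSCP-based marking described above: 4 bits of the DSCP field carry the priority (0x01 to 0x04), and the highest of those 4 bits is later used to flag a replicated packet (for example, 0x03 becomes 0x0b). The exact bit layout and helper names are assumptions.

```python
PRIORITY_MARKS = {1: 0x01, 2: 0x02, 3: 0x03, 4: 0x04}
REPLICA_FLAG = 0x08  # highest bit of the 4-bit field

def mark_priority(dscp: int, priority: int) -> int:
    """Write the priority mark into the low 4 bits of the DSCP value."""
    return (dscp & ~0x0F) | PRIORITY_MARKS[priority]

def mark_replica(dscp: int) -> int:
    """Set the highest bit of the 4-bit field, e.g. 0x03 -> 0x0b for a copied packet."""
    return dscp | REPLICA_FLAG

def read_priority(dscp: int) -> int:
    return dscp & 0x07

assert mark_priority(0x00, 3) == 0x03
assert mark_replica(0x03) == 0x0b
assert read_priority(0x0b) == 3
```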
  • In addition, when configuring network-card packet sending, the end-side server uses several priority queues for data transmission: it maintains several priority queues during network-card sending and schedules the queues by weighted round robin. Each priority queue corresponds to one priority, and the priorities of the queues differ from one another; each priority queue is configured with a queue weight, each weight is a value between 0 and 1, and the weights sum to 1. For example, the first priority corresponds to 0.5, the second to 0.25, the third to 0.15, and the fourth to 0.1.
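  • A minimal weighted round robin sketch for the priority queues described above (weights 0.5 / 0.25 / 0.15 / 0.1). The credit-based scheme is an illustrative assumption, not the patent's scheduler.

```python
from collections import deque

class WrrScheduler:
    def __init__(self, weights):
        # weights: {priority: weight}, values between 0 and 1, summing to 1
        self.queues = {p: deque() for p in weights}
        self.weights = dict(weights)
        self.credits = {p: 0.0 for p in weights}

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Top up credits in proportion to the configured weights, then serve the
        # backlogged queue that has accumulated the most credit.
        backlogged = [p for p, q in self.queues.items() if q]
        if not backlogged:
            return None
        for p in backlogged:
            self.credits[p] += self.weights[p]
        chosen = max(backlogged, key=lambda p: self.credits[p])
        self.credits[chosen] -= 1.0
        return self.queues[chosen].popleft()

scheduler = WrrScheduler({1: 0.5, 2: 0.25, 3: 0.15, 4: 0.1})
```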
  • Further, since the data packet carries a priority, the forwarding port corresponding to the packet can be determined based on that priority once the packet is received. Correspondingly, in one implementation of this embodiment, determining the cache table according to the forwarding port corresponding to the data packet when the forwarding device receives the data packet specifically includes: S11, when the forwarding device receives a data packet, obtaining the priority corresponding to the data packet; S12, if the priority meets a preset condition, determining the cache table according to the forwarding port corresponding to the data packet; S13, if the priority does not meet the preset condition, importing the data packet into the buffer to forward the data packet.
  • Specifically, the preset condition is pre-configured and serves as the criterion for deciding which data packets may be cached. When the priority meets the preset condition, the packet corresponding to that priority may be cached; when the priority does not meet the preset condition, the packet cannot be cached and must be forwarded directly. The preset condition may be a priority threshold: when the priority is higher than the threshold, the condition is not met; when the priority is lower than or equal to the threshold, the condition is met. This is because the delay tolerance increases as the priority decreases: for high-priority packets the delay tolerance is small, so they must be forwarded directly to avoid timeout retransmissions caused by caching; for low-priority packets the delay tolerance is large, so they can be stored in the caching mechanism, which enables differentiated transmission and improves the transmission capacity of the protocol stack.
  • For example, suppose packets have four priorities, recorded from high to low as the first, second, third, and fourth priorities, and the preset condition is the third priority. The first and second priorities are higher than the third priority and therefore do not meet the preset condition; the third and fourth priorities are lower than or equal to the third priority and therefore meet it. In other words, packets of the first and second priorities are imported directly into the buffer to be forwarded, while for packets of the third and fourth priorities the cache table is determined according to the forwarding port corresponding to the packet. A minimal sketch of this admission decision is given below.
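  • A self-contained sketch of the admission step S11 to S13 above: only packets whose priority is at or below the threshold (the third priority in the example) are candidates for in-network caching; the function and return values are illustrative assumptions.

```python
CACHEABLE_THRESHOLD = 3  # third priority; 1 is the highest priority (smallest delay tolerance)

def admission_decision(priority: int, matches_cache_table: bool) -> str:
    """Return which path a received packet takes."""
    if priority < CACHEABLE_THRESHOLD:
        return "buffer"               # high priority: forwarded directly, never cached
    if matches_cache_table:
        return "cache_mechanism"      # cacheable priority and matched: handed to the cache
    return "buffer_after_flag_check"  # cacheable but unmatched: buffer path, cache flag checked

assert admission_decision(1, True) == "buffer"
assert admission_decision(4, True) == "cache_mechanism"
assert admission_decision(3, False) == "buffer_after_flag_check"
```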
  • Further, the forwarding device manages received data packets in the form of priority queues and schedules the priority queues by weighted round robin, each priority queue being configured with a scheduling weight. For example, the forwarding device is configured with multiple priority queues: one absolute-priority queue serves data packets without a priority label, and four relative-priority queues serve the four priorities. Queue scheduling uses weighted round robin; each priority queue is configured with a queue weight, each weight is a value between 0 and 1, and the weights sum to 1, for example 0.5, 0.25, 0.15, and 0.1 from the highest priority queue to the lowest.
  • In addition, the forwarding device is configured with a default priority queue used to schedule packets that do not carry a priority (for example, communication packets exchanged between forwarding devices, such as OSPF and RIP packets), and the priority of this default queue is higher than that of any packet carrying a priority.
  • S20: if the data packet matches the cache table, forwarding the data packet to the caching mechanism based on the forwarding port. Specifically, the caching mechanism is pre-configured in the forwarding device and is used to cache part of the data packets received by the forwarding device and to schedule the packets it stores.
  • For example, the caching mechanism may be a NetFPGA installed in the forwarding device through an expansion interface; the NetFPGA can then serve as a cache resource with a scheduling function.
  • In addition, the caching mechanism schedules data packets in a data-block structure: periodically (at time-slice granularity) it generates a data slice for each data block it maintains and associates the slice with the corresponding data block, and at the beginning of each time slice it decrements by one the expected timeout time slice of the slices in each data block.
  • In one implementation of this embodiment, forwarding the data packet to the caching mechanism based on the forwarding port specifically includes: obtaining the packet type corresponding to the data packet; if the packet type is a replicated data packet, generating a cache matching item according to the data packet and storing it in the cache table corresponding to the forwarding port; if the packet type is not a replicated data packet, determining the data information corresponding to the data packet; when the caching mechanism includes the data block corresponding to the data information, associating the data packet with that data block; when the caching mechanism does not include the data block corresponding to the data information, creating the data block and associating the data packet with it.
  • Specifically, the packet type reflects the purpose of the data packet and includes replicated and non-replicated packets. When the packet type is a replicated data packet, the packet is used to create the corresponding cache entry in the cache table; when the packet type is a non-replicated data packet, the packet is to be transferred to the caching mechanism. The packet type can be determined from the type pre-configured in the packet; for example, 4 bits of the packet's DSCP field are used to indicate the data type.
  • When the highest bit of the 4-bit field is 1, the packet header information (i.e., source and destination IP addresses, source and destination port numbers, and protocol number) is extracted, a cache matching item is generated, and the matching item is inserted into the cache table of the corresponding forwarding port. When the highest bit of the 4-bit field is 0, the priority of the data is extracted from the priority bits of the 4-bit field (for example, 0x03), the header information (source and destination IP addresses, source and destination port numbers, and protocol number) is extracted to obtain the header data, and the header data is matched against the locally cached block data. If the corresponding data block is found, the packet is associated with the newest data slice of that block and the statistics in the corresponding block and slice are updated; if no corresponding data block is found, the caching mechanism generates the data block corresponding to the packet and creates a new data slice.
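  • An illustrative sketch of how the caching mechanism files an arriving non-replicated packet into data blocks and data slices, as described above; class and field names are assumptions, and the statistics are reduced to simple lists and counters.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class DataSlice:
    start_slice: int                 # time slice in which the slice was created
    expected_timeout: int            # expected timeout, in remaining time slices
    packets: List[bytes] = field(default_factory=list)

@dataclass
class DataBlock:
    flow: Tuple                      # 5-tuple identifying the flow
    priority: int
    arrival_slice: int
    slices: List[DataSlice] = field(default_factory=list)

class CacheMechanism:
    def __init__(self, default_timeout_slices: int = 4):
        self.blocks: Dict[Tuple, DataBlock] = {}
        self.default_timeout = default_timeout_slices
        self.current_slice = 0

    def store(self, flow: Tuple, priority: int, packet: bytes) -> None:
        block = self.blocks.get(flow)
        if block is None:                      # no block for this flow yet: create one
            block = DataBlock(flow, priority, self.current_slice)
            self.blocks[flow] = block
        if not block.slices:                   # create the first (newest) data slice lazily
            block.slices.append(DataSlice(self.current_slice, self.default_timeout))
        block.slices[-1].packets.append(packet)  # associate the packet with the newest slice
```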
  • S30: if the data packet does not match the cache table, importing the data packet into the buffer to forward it. Specifically, the buffer is configured with a cache factor and an injection factor, where the cache factor is used to evaluate the utilization of the buffer and the injection factor reflects the share of the buffer occupied by injected data.
  • The cache factor is the criterion for judging the buffer state: when the buffer utilization is greater than or equal to the cache factor, the buffer is in a saturated state; when the buffer utilization is less than the cache factor, the buffer is in an unsaturated state. When the buffer is saturated, data packets need to be cached; when it is unsaturated, the buffer can be used directly for forwarding.
  • On this basis, when the data packet does not match the cache table, the cache identifier of the cache table corresponding to the buffer needs to be determined; the cache identifier is used to reflect the state of the buffer.
  • Correspondingly, importing the data packet into the buffer to forward it specifically includes: if the data packet does not match the cache table, obtaining the cache identifier of the cache table; when the cache identifier is the first cache identifier, forwarding a copy of the data packet to the caching mechanism and importing the data packet into the buffer to forward it; when the cache identifier is the second cache identifier, importing the data packet into the buffer to forward it.
  • Specifically, the cache identifier of the cache table is determined according to the buffer utilization: the first cache identifier indicates that the buffer utilization is greater than or equal to the cache factor, and the second cache identifier indicates that the buffer utilization is less than the cache factor. The cache identifier is determined from the utilization of the buffer associated with the forwarding port, for example as follows: the caching mechanism periodically detects the buffer utilization of each forwarding port; if the utilization exceeds the cache factor, the cache identifier of the corresponding cache table is set to 1; otherwise, if the utilization does not exceed the cache factor, the cache identifier is set to 0.
  • Further, when the cache identifier of the cache table is the second cache identifier, the data packet is imported directly into the priority-3 queue of the port buffer and waits to be forwarded. When the cache identifier is the first cache identifier, the data packet is copied, the highest bit of its 4-bit DSCP field is set to 1 (i.e., the field is marked as 0x0b) and the copy is sent to the caching mechanism, while the original data packet is placed into the corresponding forwarding port to wait to be forwarded (a sketch of this step follows below).
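  • A sketch of the periodic cache-flag update and of the handling of an unmatched packet described above; the cache factor value, the dict-based packet, and the helper names are assumptions.

```python
CACHE_FACTOR = 0.8  # assumed utilization threshold; the description leaves the value open

def update_cache_flags(cache_tables, buffer_utilization):
    """Periodically set each port's cache flag (1 = first identifier, 0 = second)."""
    for port, table in cache_tables.items():
        table.cache_flag = 1 if buffer_utilization[port] >= CACHE_FACTOR else 0

def handle_unmatched_packet(packet, table, port_buffer, send_to_cache):
    """packet is a dict with a 'dscp' key; port_buffer exposes enqueue(priority, packet)."""
    if table.cache_flag == 1:            # first cache identifier: buffer saturated
        copy = dict(packet)
        copy["dscp"] |= 0x08             # mark the copy as replicated, e.g. 0x03 -> 0x0b
        send_to_cache(copy)              # the copy later creates the cache entry
    port_buffer.enqueue(3, packet)       # the original waits in the priority-3 port queue
```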
  • Further, in one implementation of this embodiment, the method further includes: the caching mechanism periodically detects the buffer utilization; when the buffer utilization is less than the cache factor, the caching mechanism selects several data packets according to the expected time slices of the data packets of each data block it stores and injects the selected packets into the buffer.
  • Specifically, the caching mechanism maintains the data packets of all data blocks with a pointer array. When the buffer utilization is less than the cache factor, it selects packets in ascending order of the expected timeout time slices of the packets of all maintained data blocks and injects the selected packets into the buffer, where the maximum amount of data selected is 0.6 times the buffer space, and the injection speed is the maximum transfer speed between the buffer and the cache divided by the number of ports.
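  • A sketch of the low-load injection policy above: packets are drained from the cache in ascending order of expected timeout, capped at 0.6 times the buffer space and paced at the maximum cache-to-buffer speed divided by the number of ports. The 0.6 cap and the pacing formula come from the description; the data layout (byte-string packets) is an assumption.

```python
def select_for_injection(cache: "CacheMechanism", buffer_space_bytes: int):
    slices = [s for block in cache.blocks.values() for s in block.slices]
    slices.sort(key=lambda s: s.expected_timeout)   # soonest expected timeout first
    budget = 0.6 * buffer_space_bytes               # at most 0.6x the buffer space
    selected, used = [], 0
    for s in slices:
        for pkt in s.packets:
            if used + len(pkt) > budget:
                return selected
            selected.append(pkt)
            used += len(pkt)
    return selected

def injection_rate(max_cache_buffer_speed_bps: float, num_ports: int) -> float:
    return max_cache_buffer_speed_bps / num_ports
```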
  • Further, in one implementation of this embodiment, the caching mechanism maintains its stored data packets based on time slices, and the method includes: at every time-slice interval, the caching mechanism detects, among all the data packets it stores, all timeout packets whose expected time has expired; the caching mechanism detects whether an operation of injecting data packets into the buffer is being performed; if not, it injects all selected timeout packets into the buffer; if so, it stops the ongoing injection operation and injects all selected timeout packets into the buffer.
  • Specifically, the detection of timeout packets works as follows: when a data slice for storing packets is generated in the caching mechanism, the caching mechanism checks the expected timeout time slices of all data slices it stores, selects all data slices whose expected timeout time slice has reached 0, sorts these slices in ascending order of the priority value of their corresponding data blocks, and sends their data to the buffer sequentially at a given injection speed (for example, the maximum transfer speed between the buffer and the cache divided by the number of ports).
  • In addition, when injecting timeout packets into the buffer, the caching mechanism checks whether an injection operation is already in progress. If one is in progress, it is stopped and all selected timeout data are injected into the buffer; if the timeout injection completes within the current time slice, the stopped injection operation is resumed; if the timeout injection does not complete within the current time slice, the timeout data that have not yet been injected are discarded, the stopped injection operation is not resumed within this time slice, and the injection policy restarts in the next time slice.
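  • A sketch of the per-time-slice timeout handling above: expected timeouts are decremented at each slice, expired data slices are collected and sorted by the priority value of their data block, and an already-running low-load injection is preempted before they are injected. The callables are assumed hooks.

```python
def on_time_slice(cache: "CacheMechanism", inject, preempt_low_load_injection):
    """inject(packets) sends a list of packets to the buffer at the given injection speed;
    preempt_low_load_injection() stops any in-progress low-load injection."""
    cache.current_slice += 1
    expired = []
    for block in cache.blocks.values():
        for s in list(block.slices):
            s.expected_timeout -= 1
            if s.expected_timeout <= 0:
                expired.append((block.priority, s))
                block.slices.remove(s)
    expired.sort(key=lambda item: item[0])   # ascending priority value of the data block
    if expired:
        preempt_low_load_injection()
        for _, s in expired:
            inject(s.packets)
```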
  • Based on the above differentiated transmission method based on in-network caching, this embodiment provides a forwarding device configured to execute the differentiated transmission method based on in-network caching described in the foregoing embodiment.
  • The forwarding device is equipped with a caching mechanism and stores a forwarding table and several cache tables.
  • The forwarding table stores the flow information (flow) of data packets and the forwarding port (port) corresponding to each packet.
  • The several cache tables correspond to the several forwarding ports configured on the forwarding device, and each cache table stores the flow information (flow) of data packets.
  • The caching mechanism is configured with several data blocks.
  • Each data block records the ID of the data block, the priority of the data block, the time slice in which the data block arrived, the number of existing data slices, and the amount of data in the data block; each data block includes several data slices, and each data slice stores its start time slice, its expected timeout time slice, the number of data packets, the total amount of data, and several data packets.
  • The forwarding device is configured with caching policies, including a cached-data management policy, a caching policy for transferring data from the buffer to the cache (the caching mechanism), an injection policy for transferring data from the cache (the caching mechanism) to the buffer when the port load is low, and an injection policy for transferring data from the cache (the caching mechanism) to the buffer when cached data times out.
  • The cached-data management policy is: when the caching mechanism receives a data packet, it extracts the flow information (packet information) carried in the packet and detects whether a data block corresponding to that flow information exists in the caching mechanism; if it exists, the packet is stored in the newest data slice of that block; if it does not exist, a data block is generated for the packet, a data slice is added, and the packet is stored in that slice.
  • The caching policy for transferring data from the buffer to the cache specifically includes: presetting the cache factor of each forwarding port and comparing the buffer utilization with the corresponding cache factor to update the cache identifier of each forwarding port; matching the data packet against the cache table; if the match succeeds, judging whether the priority of the packet belongs to the cacheable priorities, and if so inserting the packet information corresponding to the packet into the cache table, otherwise putting the packet into the port buffer; if the match fails, putting the packet into the port buffer and continuing with the next packet.
  • The injection policy for transferring data from the cache (the caching mechanism) to the buffer when a forwarding port is under low load specifically includes: presetting the cache factor of the forwarding port and comparing the buffer utilization with the corresponding cache factor to update the cache identifier of each forwarding port; if the cache identifier of the forwarding port has been reset (set to 0), checking the cached data corresponding to that forwarding port in the cache (the caching mechanism); if cached data exists, calculating the expected amount of data to inject according to the expected buffer growth, the cached-data information in the cache, and the transmission capacity between the cache and the buffer; calculating the injection speed according to the safe-utilization constraint of the expected buffer state; sorting the data slices in the cache in ascending order of expected timeout time slice; collecting, along the ascending time-slice sequence, packets whose total amount does not exceed the expected injection amount; and importing the selected packets from the cache into the buffer at the calculated injection rate. A hypothetical sizing sketch follows below.
  • The injection policy for transferring data from the cache (the caching mechanism) to the buffer when cached data times out specifically includes: the forwarding device maintains a timeout-event queue at time-slice granularity; at the beginning of each time slice, the forwarding device checks the timeout-event queue and, if timeout events exist, collects all data slices corresponding to the timeout events; the data slices are sorted in descending order of the priority of their traffic; the collected data slices are transmitted sequentially using the maximum feasible transmission bandwidth between the cache and the buffer; and any remaining data not transmitted by the end of the time slice is discarded directly.
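  • A hypothetical sketch of the sizing step in the low-load injection policy: the expected injection amount and speed are derived from the expected buffer growth, the cached data, the cache-to-buffer capacity, and a safe-utilization constraint. The concrete formulas below are assumptions made for illustration; the description only names the inputs.

```python
def expected_injection_amount(buffer_space, buffer_utilization, expected_growth,
                              cached_bytes, safe_utilization=0.9):
    """Bytes that can be injected without pushing the buffer past the safe utilization."""
    headroom = max(0.0, (safe_utilization - buffer_utilization - expected_growth) * buffer_space)
    return min(headroom, cached_bytes)

def injection_speed(max_cache_buffer_speed, num_ports):
    return max_cache_buffer_speed / num_ports

# Example: a 12 MB port buffer at 30% utilization, expecting 10% growth, with 8 MB cached,
# may receive up to min(0.5 * 12 MB, 8 MB) = 6 MB this interval.
print(expected_injection_amount(12e6, 0.3, 0.1, 8e6))  # 6000000.0
```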

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A differentiated transmission method based on in-network caching, comprising: when a forwarding device receives a data packet, determining a cache table according to the forwarding port corresponding to the data packet; if the data packet matches the cache table, forwarding the data packet to a caching mechanism based on the forwarding port, wherein the caching mechanism is configured in the forwarding device; and if the data packet does not match the cache table, importing the data packet into a buffer so as to forward the data packet.

Description

Differentiated transmission method based on in-network caching

Technical Field

The present invention relates to the technical field of data transmission, and in particular to a differentiated transmission method based on in-network caching.

Background

At present, in order to improve network transmission performance, the common approach is to improve the buffer, for example by modifying the management mechanism for data packets in the buffer or by increasing the buffer capacity. However, after the buffer capacity is increased, the queuing time of data packets in the network also increases; long waits easily cause the protocol stack to time out, which degrades network transmission performance.

The prior art therefore still needs to be improved.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a differentiated transmission method based on in-network caching, in view of the shortcomings of the prior art.
In order to solve the above technical problem, a first aspect of the embodiments of the present invention provides a differentiated transmission method based on in-network caching, the method comprising:
when a forwarding device receives a data packet, determining a cache table according to the forwarding port corresponding to the data packet;
if the data packet matches the cache table, forwarding the data packet to a caching mechanism based on the forwarding port, wherein the caching mechanism is configured in the forwarding device;
if the data packet does not match the cache table, importing the data packet into a buffer so as to forward the data packet.
In one embodiment, determining the cache table according to the forwarding port corresponding to the data packet when the forwarding device receives the data packet specifically includes:
when the forwarding device receives the data packet, obtaining the priority corresponding to the data packet;
if the priority meets a preset condition, determining the cache table according to the forwarding port corresponding to the data packet;
if the priority does not meet the preset condition, importing the data packet into the buffer to forward the data packet.
In one embodiment, importing the data packet into the buffer to forward the data packet if the data packet does not match the cache table specifically includes:
if the data packet does not match the cache table, obtaining the cache identifier of the cache table;
when the cache identifier is a first cache identifier, forwarding a copy of the data packet to the caching mechanism, and importing the data packet into the buffer to forward the data packet;
when the cache identifier is a second cache identifier, importing the data packet into the buffer to forward the data packet.
In one embodiment, the cache identifier of the cache table is determined according to the buffer utilization, wherein the first cache identifier indicates that the buffer utilization is greater than or equal to a cache factor, and the second cache identifier indicates that the buffer utilization is less than the cache factor.
In one embodiment, forwarding the data packet to the caching mechanism based on the forwarding port specifically includes:
obtaining the packet type corresponding to the data packet;
if the packet type is a replicated data packet, generating a cache matching item according to the data packet, and storing the cache matching item in the cache table corresponding to the forwarding port.
In one embodiment, forwarding the data packet to the caching mechanism based on the forwarding port includes:
if the packet type is not a replicated data packet, determining the data information corresponding to the data packet;
if the caching mechanism includes the data block corresponding to the data information, associating the data packet with the data block;
if the caching mechanism does not include the data block corresponding to the data information, creating the data block corresponding to the data information, and associating the data packet with the data block.
In one embodiment, the method further includes:
the caching mechanism periodically detecting the buffer utilization;
when the buffer utilization is less than the cache factor, the caching mechanism selecting several data packets according to the expected time slices of the data packets of each data block stored by the caching mechanism, and injecting the selected data packets into the buffer.
In one embodiment, the caching mechanism maintains its stored data packets based on time slices, and the method includes:
at every interval of the time slice, the caching mechanism detecting, among all the data packets it stores, all timeout data packets whose expected time has expired;
injecting all the selected timeout data packets into the buffer.
In one embodiment, injecting all the selected timeout data packets into the buffer specifically includes:
the caching mechanism detecting whether an operation of injecting data packets into the buffer is being performed;
if no operation of injecting data packets into the buffer is being performed, injecting all the selected timeout data packets into the buffer;
if an operation of injecting data packets into the buffer is being performed, stopping the ongoing operation of injecting data packets into the buffer, and injecting all the selected timeout data packets into the buffer.
A second aspect of the embodiments of the present invention provides a transfer device, wherein the transfer device is configured with a caching mechanism, and the transfer device is used to execute the steps in any of the above differentiated transmission methods based on in-network caching.
Beneficial effects: compared with the prior art, the present invention provides a differentiated transmission method based on in-network caching. The method includes: when a forwarding device receives a data packet, determining a cache table according to the forwarding port corresponding to the data packet; if the data packet matches the cache table, forwarding the data packet to a caching mechanism based on the forwarding port, wherein the caching mechanism is configured in the forwarding device; and if the data packet does not match the cache table, importing the data packet into a buffer to forward the data packet. By configuring a caching mechanism in the forwarding device, the present invention realizes in-network caching of transmitted data, coordinates the buffer and the cache through the cache table, achieves differentiated transmission, and overcomes the problem of upstream/downstream transmission mismatch.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.

FIG. 1 is a flowchart of the differentiated transmission method based on in-network caching provided by the present invention.
FIG. 2 is a schematic diagram of an example flow of the differentiated transmission method based on in-network caching provided by the present invention.
FIG. 3 is a schematic diagram of the in-network cache architecture in the forwarding device provided by the present invention.
FIG. 4 is a diagram of the data packet storage relationships of the caching mechanism in the forwarding device provided by the present invention.
FIG. 5 is a flowchart of the process by which the caching mechanism in the forwarding device processes cached data.

Detailed Description of the Embodiments

The present invention provides a differentiated transmission method based on in-network caching. To make the objectives, technical solutions, and effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
Those skilled in the art can understand that, unless specifically stated, the singular forms "a", "an", "the", and "said" used here may also include plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may also be present. In addition, "connected" or "coupled" as used here may include wireless connection or wireless coupling. The term "and/or" as used here includes all or any unit and all combinations of one or more of the associated listed items.
Those skilled in the art can understand that, unless otherwise defined, all terms (including technical and scientific terms) used here have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless specifically defined as here, will not be interpreted with idealized or overly formal meanings.
The content of the invention is further described below through the description of the embodiments with reference to the accompanying drawings.
This embodiment provides a differentiated transmission method based on in-network caching. As shown in FIG. 1 and FIG. 2, the method includes:
S10: when a forwarding device receives a data packet, determining a cache table according to the forwarding port corresponding to the data packet.
Specifically, the forwarding device is a terminal device used for data transmission. In the forwarding device, an in-network cache architecture based on differentiated transmission requirements is constructed: a caching mechanism is configured in the forwarding device, and the cache function is implemented by the caching mechanism. At the same time, a cache table is set up in the forwarding device, and data caching is managed through the cache table, so as to determine whether a data packet is cached in the caching mechanism or sent to the buffer.
The forwarding port is a forwarding port configured on the forwarding device, through which data packets can be sent to the buffer. It can be understood that the forwarding device is configured with several forwarding ports, each of which is used to forward data packets; each forwarding port corresponds to one cache table, and the cache tables corresponding to different forwarding ports are different, that is, forwarding ports and cache tables are in one-to-one correspondence. On this basis, when a data packet is received, the cache table corresponding to the data packet can be determined according to the forwarding port corresponding to the data packet.
The cache table includes several cache entries, and each cache entry includes header information and a forwarding port. The header information is used to match data packets, and the forwarding port is used to determine the forwarding port corresponding to a data packet that matches the header information. It can be understood that, after a data packet is received, the header information of the data packet is matched against the header information in each cache entry of the cache table to determine whether the data packet matches the cache table. When a cache entry whose header information matches the header information of the data packet exists in the cache table, the data packet matches the cache table; when no such cache entry exists in the cache table, the data packet does not match the cache table. In addition, when the data packet matches the cache table, the forwarding port carried by the matched cache entry is used as the forwarding port corresponding to the data packet.
Further, in one implementation of this embodiment, after a data packet is received, it is necessary to determine whether the data packet is a data packet forwarded through the forwarding device. For this purpose, the forwarding device stores a forwarding table, and the forwarding table stores the data packets that can be forwarded through the forwarding device. When the forwarding device receives a data packet, it parses the data packet to obtain the header data of the data packet and matches the header data against the forwarding table. If the header data is matched successfully, the cache table is determined according to the forwarding port corresponding to the data packet; if the header data fails to match, the data packet is discarded.
Further, in one implementation of this embodiment, the data packet carries a priority. The priority is used to reflect how preferentially the data packet is forwarded, and the higher the priority of a data packet, the more preferentially it is forwarded. For example, the priority of data packet A is a first priority, the priority of data packet B is a second priority, and the first priority is higher than the second priority, so that data packet A is forwarded with higher preference than data packet B.
The priority may be configured for the data packet by the end-side server, where the priorities are divided based on the delay tolerance of the data packet and each priority corresponds to a delay interval. In this way, based on the delay tolerance of a data packet, the delay interval in which it falls can be determined, and the priority corresponding to that delay interval is used as the priority corresponding to the data packet. After the priority corresponding to the data packet is obtained, a priority label corresponding to the priority is configured on the data packet, so that after obtaining the data packet the forwarding device can determine the corresponding priority according to the priority label carried by the data packet, and can then determine the forwarding port corresponding to the data packet based on the priority.
In addition, since the priority is determined based on the delay tolerance of the data packet, when the priority is configured in the data packet, the delay tolerance corresponding to the data packet can also be configured in the data packet. In this way, after obtaining the data packet, the forwarding device can obtain the delay tolerance corresponding to the data packet so as to forward the data packet within that delay, avoiding the timeout retransmission problem caused by packet expiry. It can be understood that a data packet received by the forwarding device carries a priority and a delay tolerance, and the forwarding device can determine the forwarding port corresponding to the data packet based on the priority and determine, based on the delay requirement, the latest time by which the data packet needs to be forwarded. In this way, the forwarding device can schedule data packets based on priority and delay requirements, which can improve the transmission capacity of the forwarding device. For example, the data packets of each priority are configured with different TCP protocol stack RTO values; the delay of a data packet is represented by its RTO value, and the RTO values corresponding to the priorities are different, for example 100 ms, 200 ms, 1 s, and 2 s.
In one specific implementation, the end-side server divides packets into four priorities according to delay needs, recorded as a first priority, a second priority, a third priority, and a fourth priority, where the first priority is higher than the second priority, the second priority is higher than the third priority, and the third priority is higher than the fourth priority, and the delay tolerance corresponding to the priorities increases in order from high to low; for example, the delay tolerance corresponding to the first priority is smaller than the delay tolerance corresponding to the second priority. In addition, so that the forwarding device can read the priority corresponding to a data packet, the priority may be marked in the DSCP field of the data packet (for example, 4 bits of the DSCP field are selected as the priority storage bits), so that the DSCP field carries the priority. For example, the first priority is marked as 0x01, the second priority as 0x02, the third priority as 0x03, and the fourth priority as 0x04.
In addition, when configuring network-card packet sending, the end-side server uses several priority queues for data sending: the end-side server maintains several priority queues during network-card sending and uses weighted round robin for queue scheduling. Each of the several priority queues corresponds to one priority, and the priorities corresponding to the queues are different from one another; each priority queue is configured with a queue weight, each queue weight is a value between 0 and 1, and the sum of the queue weights is 1. For example, the first priority corresponds to 0.5, the second priority to 0.25, the third priority to 0.15, and the fourth priority to 0.1.
Further, since the data packet carries a priority, after the data packet is received, the forwarding port corresponding to the data packet can be determined based on the priority carried by the data packet. Correspondingly, in one implementation of this embodiment, determining the cache table according to the forwarding port corresponding to the data packet when the forwarding device receives the data packet specifically includes:
S11: when the forwarding device receives a data packet, obtaining the priority corresponding to the data packet;
S12: if the priority meets a preset condition, determining the cache table according to the forwarding port corresponding to the data packet;
S13: if the priority does not meet the preset condition, importing the data packet into the buffer to forward the data packet.
Specifically, the preset condition is pre-configured and is used as the basis for determining which data packets can be cached. When the priority meets the preset condition, the data packet corresponding to the priority can be cached; when the priority does not meet the preset condition, the data corresponding to the priority cannot be cached and needs to be forwarded directly. The preset condition may be a priority threshold: when the priority is higher than the priority threshold, the priority does not meet the preset condition; when the priority is lower than or equal to the priority threshold, the priority meets the condition. This is because the delay tolerance corresponding to the priorities increases gradually from high to low: for high-priority data packets, the delay tolerance is small, and they need to be forwarded directly to avoid the timeout retransmission problem caused by caching; for low-priority data packets, the delay tolerance is large, and the data packets can be stored in the caching mechanism, which achieves differentiated transmission and improves the transmission capacity of the protocol stack.
For example, suppose data packets correspond to four priorities, recorded in order from high to low as the first priority, the second priority, the third priority, and the fourth priority, and the preset condition may be the third priority. For the first priority and the second priority, since the priority is higher than the third priority, the first priority and the second priority do not meet the preset condition; for the third priority and the fourth priority, since the third priority and the fourth priority are lower than or equal to the third priority, the third priority and the fourth priority meet the preset condition. In other words, data packets of the first priority and the second priority are imported directly into the buffer to forward the data packets, and for data packets of the third priority and the fourth priority the cache table is determined according to the forwarding port corresponding to the data packet.
Further, the forwarding device manages received data packets in the form of priority queues and schedules the priority queues by weighted round robin, each priority queue being configured with a scheduling weight. For example, the forwarding device is configured with multiple priority queues: one absolute-priority queue is set to serve data packets without a priority label, and four relative-priority queues are set to serve data of the four priorities. Queue scheduling uses weighted round robin; each priority queue is configured with a queue weight, each queue weight is a value between 0 and 1, and the sum of the queue weights is 1. For example, the priority queue weights from high to low are 0.5, 0.25, 0.15, and 0.1, respectively. In addition, the forwarding device is configured with a default priority queue, which is used to schedule data packets that do not carry a priority (for example, communication packets between forwarding devices, such as OSPF and RIP packets), and the priority corresponding to the default priority queue is higher than that of any data packet carrying a priority.
S20: if the data packet matches the cache table, forwarding the data packet to a caching mechanism based on the forwarding port, wherein the caching mechanism is configured in the forwarding device.
Specifically, the caching mechanism is pre-configured in the forwarding device and is used to cache part of the data packets received by the forwarding device and to schedule the data packets it stores. For example, the caching mechanism may be a NetFPGA installed in the forwarding device through an expansion interface of the forwarding device, and the NetFPGA can serve as a cache resource with a scheduling function. In addition, the caching mechanism schedules data packets in a data-block structure; the caching mechanism periodically (at time-slice granularity) generates a data slice for each data block it maintains and associates it with the corresponding data block, and at the beginning of each time slice it decrements by one the expected timeout time slice in each data block.
In one implementation of this embodiment, forwarding the data packet to the caching mechanism based on the forwarding port specifically includes:
obtaining the packet type corresponding to the data packet;
if the packet type is a replicated data packet, generating a cache matching item according to the data packet, and storing the cache matching item in the cache table corresponding to the forwarding port;
if the packet type is not a replicated data packet, determining the data information corresponding to the data packet;
when the caching mechanism includes the data block corresponding to the data information, associating the data packet with the data block;
when the caching mechanism does not include the data block corresponding to the data information, creating the data block corresponding to the data information, and associating the data packet with the data block.
Specifically, the packet type is used to reflect the purpose of the data packet; the packet type includes replicated data packets and non-replicated data packets. When the packet type is a replicated data packet, the data packet is used to create the cache entry corresponding to the data packet in the cache table; when the packet type is a non-replicated data packet, the data packet is to be transferred to the caching mechanism. In addition, the packet type may be determined based on the packet type pre-configured in the data packet; for example, 4 bits of the DSCP field of the data packet are configured to indicate the data type. When the highest bit of the 4-bit field is 1, the header information of the data packet (i.e., source and destination IP addresses, source and destination port numbers, and protocol number) is extracted, a cache matching item is generated, and the cache matching item is inserted into the cache table of the corresponding forwarding port. When the highest bit of the 4-bit field is 0, the priority of the data (for example, 0x03) is extracted from the priority bits of the 4-bit field, and the header information of the data packet (i.e., source and destination IP addresses, source and destination port numbers, and protocol number) is extracted to obtain the header data; the caching mechanism matches the header data against the locally cached block data. If the corresponding data block is found, the data packet is associated with the newest data slice of that data block, and the statistics in the corresponding data block and data slice are updated; if the corresponding data block is not found, the caching mechanism generates the data block corresponding to the data packet and generates a new data slice.
S30: if the data packet does not match the cache table, importing the data packet into the buffer to forward the data packet.
Specifically, the buffer is denoted as buffer, and the buffer is configured with a cache factor and an injection factor, where the cache factor is used to evaluate the utilization of the buffer and the injection factor is used to reflect the proportion of the buffer occupied by injected data. The cache factor is the basis for measuring the state of the buffer: when the buffer utilization is greater than or equal to the cache factor, the buffer is in a saturated state; when the buffer utilization is less than the cache factor, the buffer is in an unsaturated state. When the buffer is in a saturated state, data packets need to be cached; when the buffer is in an unsaturated state, the buffer can be used directly for forwarding.
On this basis, when the data packet does not match the cache table, the cache identifier of the cache table corresponding to the buffer needs to be determined, and the cache identifier is used to reflect the state of the buffer. Correspondingly, importing the data packet into the buffer to forward the data packet if the data packet does not match the cache table specifically includes:
if the data packet does not match the cache table, obtaining the cache identifier of the cache table;
when the cache identifier is the first cache identifier, forwarding a copy of the data packet to the caching mechanism, and importing the data packet into the buffer to forward the data packet;
when the cache identifier is the second cache identifier, importing the data packet into the buffer to forward the data packet.
Specifically, the cache identifier of the cache table is determined according to the buffer utilization, where the first cache identifier indicates that the buffer utilization is greater than or equal to the cache factor and the second cache identifier indicates that the buffer utilization is less than the cache factor. The cache identifier of the cache table is determined based on the utilization of the buffer corresponding to the forwarding port, and the process of determining the cache identifier may be: the caching mechanism periodically detects the buffer utilization of each forwarding port; if the utilization exceeds the cache factor, the cache identifier of the corresponding cache table is set to 1; otherwise, if the utilization does not exceed the cache factor, the cache identifier of the corresponding cache table is set to 0.
Further, when the cache identifier of the cache table is the second cache identifier, the data packet is imported directly into the priority-3 queue of the port buffer and waits to be forwarded. When the cache identifier of the cache table is the first cache identifier, the data packet is copied, the highest bit of its 4-bit DSCP field is set to 1 (i.e., it is marked as 0x0b) and the copy is sent to the caching mechanism, while the original data packet is placed into the corresponding forwarding port to wait to be forwarded.
Further, in one implementation of this embodiment, the method further includes:
the caching mechanism periodically detecting the buffer utilization;
when the buffer utilization is less than the cache factor, the caching mechanism selecting several data packets according to the expected time slices of the data packets of each data block it stores, and injecting the selected data packets into the buffer.
Specifically, the caching mechanism maintains the data packets of all data blocks with a pointer array. When the buffer utilization is less than the cache factor, the corresponding data packets are selected sequentially in ascending order of the expected timeout time slices of the data packets of all maintained data blocks, and the selected data packets are injected into the buffer, where the maximum amount of data corresponding to the selected data packets is 0.6 times the buffer space, and the injection speed corresponding to the selected data packets is the maximum transfer speed between the buffer and the cache divided by the number of ports.
Further, in one implementation of this embodiment, the caching mechanism maintains its stored data packets based on time slices, and the method includes:
at every interval of the time slice, the caching mechanism detecting, among all the data packets it stores, all timeout data packets whose expected time has expired;
the caching mechanism detecting whether an operation of injecting data packets into the buffer is being performed;
if no operation of injecting data packets into the buffer is being performed, injecting all the selected timeout data packets into the buffer;
if an operation of injecting data packets into the buffer is being performed, stopping the ongoing operation of injecting data packets into the buffer, and injecting all the selected timeout data packets into the buffer.
Specifically, the detection by the caching mechanism, at every interval of the time slice, of all timeout data packets whose expected time has expired among all the data packets it stores works as follows: when a data slice for storing data packets is generated in the caching mechanism, the caching mechanism checks the expected timeout time slices of all the data slices it stores, selects all timed-out data slices whose expected time slice is 0, sorts the data slices in ascending order according to the priority value of their corresponding data blocks, and sends the data to the buffer sequentially at a given injection speed (for example, the maximum transfer speed between the buffer and the cache divided by the number of ports).
In addition, when injecting timeout data packets into the buffer, it is detected whether an operation of injecting data packets into the buffer is being performed. If such an operation is being performed, the ongoing operation of injecting data packets into the buffer is stopped and all the selected timeout data are injected into the buffer; if the timeout data injection is completed within this time slice, the stopped operation of injecting data packets into the buffer is resumed; if the timeout data injection is not completed within this time slice, the timeout data that have not been injected are discarded, the stopped operation of injecting data packets into the buffer is not resumed within this time slice, and the injection policy is restarted in the next time slice. Of course, in practical applications, after all the cached data in a data block in the cache have been injected, the corresponding cache entry is deleted from the corresponding cache table, and when data packets are imported from the caching mechanism into the buffer, the 4-bit DSCP field of the imported data packets is set to 0x05.
Based on the above differentiated transmission method based on in-network caching, this embodiment provides a forwarding device, and the forwarding device is used to execute the differentiated transmission method based on in-network caching described in the above embodiment. As shown in FIG. 3, the forwarding device is configured with a caching mechanism, and the forwarding device stores a forwarding table and several cache tables; the forwarding table stores the flow information (flow) of data packets and the forwarding port (port) corresponding to each data packet; the several cache tables correspond to the several forwarding ports configured on the forwarding device, and each cache table stores the flow information (flow) of data packets. As shown in FIG. 4, the caching mechanism is configured with several data blocks; a data block records the ID of the data block, the priority of the data block, the time slice in which the data block arrived, the number of existing data slices, and the amount of data in the data block; each data block includes several data slices, each data slice stores the data information of its start time slice, its expected timeout time slice, the number of data packets, and the total amount of data, and each data slice stores several data packets.
The forwarding device is configured with caching policies, and the caching policies include a cached-data management policy, a caching policy for transferring data from the buffer to the cache (the caching mechanism), an injection policy for transferring data from the cache (the caching mechanism) to the buffer when the port load is low, and an injection policy for transferring data from the cache (the caching mechanism) to the buffer when the cached data times out.
As shown in FIG. 5, the cached-data management policy is: when the caching mechanism receives a data packet, it extracts the flow information (packet information) carried in the data packet and detects whether a data block corresponding to the flow information exists in the caching mechanism; if it exists, the data packet is stored in the newest data slice in that data block; if it does not exist, a data block is generated for the data packet, a data slice is added, and the data packet is stored in that data slice.
The caching policy for transferring data from the buffer to the cache (the caching mechanism) specifically includes: presetting the cache factor of the forwarding port, and comparing the buffer utilization with the corresponding cache factor to update the cache identifier of each forwarding port; matching the data packet against the cache table; if the match succeeds, judging whether the priority of the data packet belongs to the cacheable priorities, and if so, inserting the packet information corresponding to the data packet into the cache table, otherwise putting the data packet into the port buffer; if the match fails, putting the data packet into the port buffer and continuing with the matching of the next data packet.
The injection policy for transferring data from the cache (the caching mechanism) to the buffer when the forwarding port is under low load specifically includes: presetting the cache factor of the forwarding port, and comparing the buffer utilization with the corresponding cache factor to update the cache identifier of each forwarding port; if the cache identifier of the forwarding port has been reset (set to 0), checking the cached data corresponding to that forwarding port in the cache (the caching mechanism); if cached data exists, calculating the expected amount of data to inject according to the expected buffer growth, the cached-data information in the cache, and the transmission capacity between the cache and the buffer, calculating the injection speed according to the safe-utilization constraint of the expected buffer state, sorting the data slices in the cache in ascending order of expected timeout time slice, collecting, in the ascending time-slice sequence, data packets whose total amount does not exceed the expected injection amount, and importing the selected injection data packets from the cache into the buffer according to the calculated injection rate.
The injection policy for transferring data from the cache (the caching mechanism) to the buffer when the cached data times out specifically includes: the forwarding device maintains a timeout-event queue at time-slice granularity; at the beginning of each time slice, the forwarding device checks the timeout-event queue, and if timeout events exist, it collects all data slices corresponding to the timeout events; the data slices are sorted in descending order according to their corresponding traffic priorities; the collected data-slice data are transmitted sequentially using the maximum feasible transmission bandwidth between the cache and the buffer; and if there is remaining data that has not been transmitted by the end of the time slice, it is discarded directly.
Finally, it should be noted that the above embodiments are only used to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features, and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A differentiated transmission method based on in-network caching, characterized in that the method comprises:
    when a forwarding device receives a data packet, determining a cache table according to the forwarding port corresponding to the data packet;
    if the data packet matches the cache table, forwarding the data packet to a caching mechanism based on the forwarding port, wherein the caching mechanism is configured in the forwarding device;
    if the data packet does not match the cache table, importing the data packet into a buffer so as to forward the data packet.
  2. The differentiated transmission method based on in-network caching according to claim 1, characterized in that determining the cache table according to the forwarding port corresponding to the data packet when the forwarding device receives the data packet specifically comprises:
    when the forwarding device receives the data packet, obtaining the priority corresponding to the data packet;
    if the priority meets a preset condition, determining the cache table according to the forwarding port corresponding to the data packet;
    if the priority does not meet the preset condition, importing the data packet into the buffer to forward the data packet.
  3. The differentiated transmission method based on in-network caching according to claim 1, characterized in that importing the data packet into the buffer to forward the data packet if the data packet does not match the cache table specifically comprises:
    if the data packet does not match the cache table, obtaining the cache identifier of the cache table;
    when the cache identifier is a first cache identifier, forwarding a copy of the data packet to the caching mechanism, and importing the data packet into the buffer to forward the data packet;
    when the cache identifier is a second cache identifier, importing the data packet into the buffer to forward the data packet.
  4. The differentiated transmission method based on in-network caching according to claim 3, characterized in that the cache identifier of the cache table is determined according to the buffer utilization, wherein the first cache identifier indicates that the buffer utilization is greater than or equal to a cache factor, and the second cache identifier indicates that the buffer utilization is less than the cache factor.
  5. The differentiated transmission method based on in-network caching according to claim 1, characterized in that forwarding the data packet to the caching mechanism based on the forwarding port specifically comprises:
    obtaining the packet type corresponding to the data packet;
    if the packet type is a replicated data packet, generating a cache matching item according to the data packet, and storing the cache matching item in the cache table corresponding to the forwarding port.
  6. The differentiated transmission method based on in-network caching according to claim 5, characterized in that forwarding the data packet to the caching mechanism based on the forwarding port comprises:
    if the packet type is not a replicated data packet, determining the data information corresponding to the data packet;
    if the caching mechanism includes the data block corresponding to the data information, associating the data packet with the data block;
    if the caching mechanism does not include the data block corresponding to the data information, creating the data block corresponding to the data information, and associating the data packet with the data block.
  7. The differentiated transmission method based on in-network caching according to claim 1, characterized in that the method further comprises:
    the caching mechanism periodically detecting the buffer utilization;
    when the buffer utilization is less than the cache factor, the caching mechanism selecting several data packets according to the expected time slices of the data packets of each data block stored by the caching mechanism, and injecting the selected data packets into the buffer.
  8. The differentiated transmission method based on in-network caching according to claim 1 or 7, characterized in that the caching mechanism maintains its stored data packets based on time slices, and the method comprises:
    at every interval of the time slice, the caching mechanism detecting, among all the data packets it stores, all timeout data packets whose expected time has expired;
    injecting all the selected timeout data packets into the buffer.
  9. The differentiated transmission method based on in-network caching according to claim 8, characterized in that injecting all the selected timeout data packets into the buffer specifically comprises:
    the caching mechanism detecting whether an operation of injecting data packets into the buffer is being performed;
    if no operation of injecting data packets into the buffer is being performed, injecting all the selected timeout data packets into the buffer;
    if an operation of injecting data packets into the buffer is being performed, stopping the ongoing operation of injecting data packets into the buffer, and injecting all the selected timeout data packets into the buffer.
  10. A transfer device, characterized in that the transfer device is configured with a caching mechanism, and the transfer device is used to execute the steps in the differentiated transmission method based on in-network caching according to any one of claims 1 to 9.
PCT/CN2021/094892 2020-05-28 2021-05-20 Differentiated transmission method based on in-network caching WO2021238764A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010470454.0A CN111770027B (zh) 2020-05-28 2020-05-28 Differentiated transmission method based on in-network caching
CN202010470454.0 2020-05-28

Publications (1)

Publication Number Publication Date
WO2021238764A1 true WO2021238764A1 (zh) 2021-12-02

Family

ID=72719595

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/094892 WO2021238764A1 (zh) 2020-05-28 2021-12-02 Differentiated transmission method based on in-network caching

Country Status (2)

Country Link
CN (1) CN111770027B (zh)
WO (1) WO2021238764A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710453A (zh) * 2022-03-16 2022-07-05 深圳市风云实业有限公司 一种高宽带低延时存储转发控制装置及其控制方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770027B (zh) * 2020-05-28 2022-03-08 南方科技大学 一种基于网内缓存的差异化传输方法


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107634915A (zh) * 2017-08-25 2018-01-26 中国科学院计算机网络信息中心 数据传输方法、装置及储存介质
CN112580755A (zh) * 2019-09-30 2021-03-30 菜鸟智能物流控股有限公司 数据传输方法、装置、电子设备和存储介质
CN110891023A (zh) * 2019-10-31 2020-03-17 上海赫千电子科技有限公司 一种基于优先级策略的信号路由转换方法及装置
CN110708260A (zh) * 2019-11-13 2020-01-17 鹏城实验室 数据包传输方法及相关装置
CN111770027A (zh) * 2020-05-28 2020-10-13 南方科技大学 一种基于网内缓存的差异化传输方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI WANXIN; LI QING; WANG CHAO; SHEN GENGBIAO; LI WEICHAO; WU YU; JIANG YONG: "LEAP: Learning-Based Smart Edge with Caching and Prefetching for Adaptive Video Streaming", 2019 IEEE/ACM 27TH INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE (IWQOS), 16 April 2020 (2020-04-16), pages 1 - 10, XP033757506, DOI: 10.1145/3326285.3329051 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710453A (zh) * 2022-03-16 2022-07-05 深圳市风云实业有限公司 一种高宽带低延时存储转发控制装置及其控制方法
CN114710453B (zh) * 2022-03-16 2023-10-10 深圳市风云实业有限公司 一种高宽带低延时存储转发控制装置及其控制方法

Also Published As

Publication number Publication date
CN111770027A (zh) 2020-10-13
CN111770027B (zh) 2022-03-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21812529

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21812529

Country of ref document: EP

Kind code of ref document: A1