WO2018171115A1 - 一种分片的服务质量保证方法及现场可编程逻辑门阵列 - Google Patents

一种分片的服务质量保证方法及现场可编程逻辑门阵列 (Fragment quality-of-service guarantee method and field programmable gate array)

Info

Publication number
WO2018171115A1
Authority
WO
WIPO (PCT)
Prior art keywords
priority
fragment
cache
threshold
space
Prior art date
Application number
PCT/CN2017/098322
Other languages
English (en)
French (fr)
Inventor
李建宇
皋华敏
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2018171115A1 publication Critical patent/WO2018171115A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/32: Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames

Definitions

  • The present application relates to, but is not limited to, the field of broadband access data communication, and more particularly to a quality-of-service guarantee method for fragments and a field programmable gate array (FPGA).
  • DSL (Digital Subscriber Line) is a broadband access technology that can carry data at rates of up to 2 Gbit/s over conventional telephone twisted-pair wiring.
  • the architecture of a typical DSL access device is shown in Figure 1.
  • the DSL device consists of a main control board and a plurality of line cards.
  • the line card has a DSL chip and an FPGA (Field Programmable Gate Array), and the main control board has a switching chip.
  • In the upstream direction from the DSL ports to the uplink port, the DSL chip cuts each Ethernet packet into fixed-size original fragments and then, as required by the ITU-T G.999.1 standard, adds a GINT header to the outside of each of these cut original fragments (ITU-T G.999.1 is usually referred to as the G.INT standard, and the fragments defined in the G.INT standard are referred to as GINT fragments for short; G denotes the G-series of ITU standards and INT is an abbreviation of the English word Interface), thereby forming GINT fragments.
  • After receiving the GINT fragments from each port of the DSL chip, the FPGA buffers and aggregates them on chip and sends them through the inline port to the switching chip of the main control board.
  • the switch chip reassembles the GINT fragments from each DSL port into Ethernet packets, and processes them before sending them out through the uplink port.
  • the GINT fragments transmitted on the DSL port are divided into three different priorities.
  • The highest-priority H_GINT fragments must not be discarded; the medium-priority M_GINT fragments should be discarded as rarely as possible; the low-priority L_GINT fragments may be discarded.
  • Because the DSL device is a data-aggregation device, the effective bandwidth of the line card's inline port is usually smaller than the sum of the bandwidths of the DSL ports, so congestion forms at the egress of the inline port and GINT fragments are lost.
  • To ensure that the highest-priority H_GINT fragments are never lost and that the medium-priority M_GINT fragments are lost as rarely as possible, quality of service (QoS) needs to be implemented on the FPGA chip.
  • FIG. 2 is based on FIG. 1 and implements QoS on an FPGA chip.
  • The FPGA chip includes an interface processing module, a fragment parsing and classification module, fragment descriptor queues, a queue scheduling module and a fragment sending module.
  • The QoS method in common use is egress QoS: the interface processing module receives the fragments sent from each port, and the fragment parsing and classification module parses and classifies the fragments according to their priority information; three queues are set up before the outbound inline port and assigned high, medium and low priority respectively, storing the H, M and L priority GINT fragments.
  • The queue scheduling module behind the queues dispatches the GINT fragments from the queues according to priority and sends them to the inline port through the fragment sending module of the following stage.
  • When scheduling, the queue scheduling module first finishes dispatching the H_GINT fragments of the high-priority queue, then schedules the M_GINT fragments of the medium-priority queue, and finally schedules the L_GINT fragments of the low-priority queue.
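For clarity, a minimal C sketch of this conventional strict-priority egress scheduling is given below; the queue structure, depth and function names are illustrative assumptions and not part of the patent, which targets FPGA hardware.

```c
#include <stddef.h>

/* Three egress queues in front of the inline port, holding H_GINT, M_GINT
 * and L_GINT fragments respectively (illustrative ring buffers).          */
enum { Q_HIGH, Q_MEDIUM, Q_LOW, Q_COUNT };
#define Q_DEPTH 1024

struct gint_queue {
    const void *frags[Q_DEPTH];
    size_t head, tail;              /* dequeue at head, enqueue at tail */
};

static const void *dequeue(struct gint_queue *q)
{
    if (q->head == q->tail)
        return NULL;                /* queue empty */
    return q->frags[q->head++ % Q_DEPTH];
}

/* Strict-priority egress scheduling: the high-priority queue is drained
 * completely before the medium queue is served, and the medium queue
 * before the low queue.                                                 */
static const void *schedule_next(struct gint_queue queues[Q_COUNT])
{
    for (int i = Q_HIGH; i <= Q_LOW; i++) {
        const void *frag = dequeue(&queues[i]);
        if (frag != NULL)
            return frag;            /* fragment handed to the sending module */
    }
    return NULL;                    /* nothing to send */
}
```

It is exactly this behaviour that, as described next, can interleave fragments of different packets coming from the same DSL port.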
  • Each DSL port carries high-, medium- and low-priority Ethernet packets, which are cut and encapsulated in the DSL chip into fixed-length H_GINT, M_GINT and L_GINT fragments.
  • the GINT fragment can be divided into a first fragment, an intermediate fragment and a tail fragment by the identifier of the GINT header, which respectively correspond to the header, the middle and the tail of the Ethernet packet. Meanwhile, the GINT header also identifies the DSL port number. After the scheduled GINT fragments enter the switch chip of the main control board from the inline port, they are reassembled into Ethernet packets according to the ports.
  • When a DSL port first receives a low-priority Ethernet packet, it is processed into L_GINT fragments that enter the FPGA's low-priority queue; since the high-priority queue is empty at that moment, the queue scheduling module starts dispatching the L_GINT fragments to the switching chip of the main control board for reassembly.
  • At this point the reassembly module of the switching chip has received the first L_GINT fragment and intermediate L_GINT fragments, but not yet the tail L_GINT fragment; before the L_GINT fragments in the low-priority queue have all been scheduled, a high-priority packet from the same DSL port is cut into H_GINT fragments and enters the FPGA's high-priority queue, so the queue scheduling module starts scheduling the H_GINT fragments and sends them to the reassembly module of the switching chip for reassembly.
  • For that DSL port, the reassembly module of the switching chip has then received, in order, the first L_GINT fragment, intermediate L_GINT fragments, the first H_GINT fragment, intermediate H_GINT fragments and the tail H_GINT fragment.
  • When reassembling, the reassembly module combines all fragments between a first fragment and a tail fragment into one Ethernet packet; because these first and tail fragments come from different Ethernet packets, reassembly ultimately fails and the high-priority Ethernet packet is discarded. Therefore, with the traditional egress QoS method, GINT fragments arrive at the main control board's switching chip out of order, wrong packets are reassembled and packets are lost, so the method cannot be used on the DSL device.
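The failure mode just described can be illustrated with a small per-port reassembly model in C: the reassembler simply accumulates everything between a first fragment and the next tail fragment, so interleaved fragments of two packets on the same port corrupt the result. This state machine is an illustrative sketch, not the switching chip's actual logic, and all names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

enum frag_type { FRAG_FIRST, FRAG_MIDDLE, FRAG_TAIL };

/* Per-DSL-port reassembly state in the switching chip (illustrative model). */
struct reasm_state {
    bool     in_packet;   /* a first fragment has been received          */
    uint8_t  first_prio;  /* priority carried by that first fragment     */
    uint32_t bytes;       /* payload accumulated so far                  */
};

/* Feed one fragment of a given type/priority/length into the reassembler.
 * Returns true when a tail fragment closes a packet; *ok then tells whether
 * the first and tail fragments belonged to the same Ethernet packet.       */
static bool on_fragment(struct reasm_state *s, enum frag_type type,
                        uint8_t prio, uint32_t len, bool *ok)
{
    switch (type) {
    case FRAG_FIRST:
        s->in_packet  = true;
        s->first_prio = prio;
        s->bytes      = len;
        return false;
    case FRAG_MIDDLE:
        if (s->in_packet)
            s->bytes += len;        /* blindly appended between first and tail */
        return false;
    case FRAG_TAIL:
        if (!s->in_packet)
            return false;
        s->bytes += len;
        s->in_packet = false;
        /* An L_GINT first fragment followed by H_GINT fragments and an
         * H_GINT tail makes the priorities disagree here, so the packet
         * assembled from the mixed fragments must be discarded.          */
        *ok = (prio == s->first_prio);
        return true;
    }
    return false;
}
```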
  • The embodiments of the present invention provide a quality-of-service guarantee method for fragments and a field programmable gate array, so as to avoid GINT fragments arriving at the switching chip of the main control board out of order, being reassembled into wrong packets or being lost, which would make the scheme unusable on a DSL device.
  • An embodiment of the present invention provides a method for guaranteeing quality of a fragment, including:
  • receiving a fragment, the fragment being cut from an Ethernet packet coming from at least one digital subscriber line port;
  • parsing the priority information of the fragment from the fragment;
  • determining, according to the priority information of the fragment, the size of the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, whether to store the fragment in the buffer space;
  • if so, storing the fragment in the buffer space, and determining according to a preset rule whether to update the cache threshold of the priority;
  • if so, updating the cache threshold of the priority;
  • taking the fragments to be sent out of the buffer space according to the first-in-first-out rule and sending them out.
  • Embodiments of the present invention provide a field programmable logic gate array, including:
  • a receiving module configured to receive a fragment, the fragment being cut from an Ethernet packet coming from at least one digital subscriber line port;
  • the parsing module is configured to parse the priority information of the fragment from the fragment;
  • the first determining module is configured to determine, according to the priority information of the fragment, the size of the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, whether to store the fragment in the buffer space;
  • the storage module is configured to: if the first determining module determines that the fragment needs to be stored in the buffer space, the fragment is stored in the buffer space;
  • the second determining module is configured to determine, according to the preset rule, whether to update the cache threshold of the priority;
  • the outgoing module is configured to take out the to-be-sent fragment in the cache space according to the first-in-first-out rule and send it out.
  • The embodiments of the present invention further provide a computer storage medium, wherein the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to execute the foregoing quality-of-service guarantee method for fragments.
  • In the quality-of-service guarantee method for fragments and the field programmable gate array, a fragment is received, the fragment having been cut from an Ethernet packet coming from at least one digital subscriber line port; the priority information of the fragment is parsed from the fragment; according to the priority information of the fragment, the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, it is determined whether to store the fragment in the buffer space; if so, the fragment is stored in the buffer space and it is determined according to a preset rule whether to update the cache threshold of the priority; if so, the cache threshold of the priority is updated; the fragments to be sent are taken out of the buffer space according to a first-in-first-out rule and sent out. In this way, when the uplink data channel of the DSL device is congested, high-priority fragments are guaranteed not to be lost, medium-priority fragments are lost as rarely as possible, and low-priority fragments may be lost, so that quality of service is guaranteed.
  • FIG. 1 is a schematic diagram of a DSL broadband access device;
  • FIG. 2 is a schematic diagram of a broadband access device for exporting QoS
  • FIG. 3 is a flowchart of a quality-of-service guarantee method for fragments according to Embodiment 1 of the present invention;
  • FIG. 4 is a schematic diagram of buffer space division according to Embodiments 1 and 2 of the present invention.
  • FIG. 5 is a schematic diagram of an FPGA according to Embodiment 2 of the present invention.
  • FIG. 3 is a flowchart of a method for guaranteeing quality of a fragment according to an embodiment, where the method includes the following steps:
  • S301: Receive a fragment, where the fragment is cut from an Ethernet packet coming from at least one DSL port;
  • FIG. 1 is a structural diagram of a DSL broadband access device according to an embodiment of the present invention
  • a DSL broadband device is composed of a main control board and a plurality of line cards, and the line card and the main control board are connected through an inline port.
  • the FPGA aggregates GINT fragments from multiple DSL ports and sends them to the switch chip of the main control board through the inline port for reassembly.
  • the fragment in S301 refers to the GINT fragment.
  • The GINT fragments originate in the upstream direction from the DSL ports to the uplink port: the DSL chip cuts each Ethernet packet into fixed-size original fragments and then, as required by the ITU-T G.999.1 standard, adds a GINT header to the outside of each cut original fragment, thereby forming GINT fragments.
  • The DSL ports can be polled so that GINT fragments are received from each DSL port in turn.
  • The GINT fragments transmitted on a DSL port are divided into three different priorities, including high-priority fragments H_GINT, medium-priority fragments M_GINT and low-priority fragments L_GINT; the highest-priority H_GINT fragments must not be discarded, the medium-priority M_GINT fragments should be discarded as rarely as possible, and the low-priority L_GINT fragments may be discarded.
  • Each DSL port carries high-, medium- and low-priority Ethernet packets, which are cut and encapsulated in the DSL chip into fixed-length H_GINT, M_GINT and L_GINT fragments. By the identifier in the GINT header, a GINT fragment can be classified as a first fragment, an intermediate fragment or a tail fragment, corresponding respectively to the head, middle and tail of the Ethernet packet; the GINT header also identifies the DSL port number. That is, a GINT fragment carries the priority information of the fragment (high, medium or low priority), the fragment type information (first, intermediate or tail fragment), and the DSL port number information.
  • S302: Parse the priority information of the fragment from the fragment, i.e. determine whether the received fragment is a high-priority H_GINT, a medium-priority M_GINT or a low-priority L_GINT fragment;
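The fields listed above can be summarized in a small C data-model sketch; only the recoverable field meanings are shown, since the bit-level GINT header layout defined by ITU-T G.999.1 is not reproduced in this text, and all identifiers are illustrative.

```c
#include <stdint.h>

/* Information that the description says a GINT fragment carries:
 * its priority, its fragment type and the DSL port it came from.  */
enum gint_priority  { GINT_PRIO_HIGH, GINT_PRIO_MEDIUM, GINT_PRIO_LOW };
enum gint_frag_type { GINT_FRAG_FIRST, GINT_FRAG_MIDDLE, GINT_FRAG_TAIL };

struct gint_frag_info {
    enum gint_priority  priority;  /* H_GINT, M_GINT or L_GINT             */
    enum gint_frag_type type;      /* first, intermediate or tail fragment */
    uint8_t             dsl_port;  /* DSL port number from the GINT header */
};
```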
  • S303: Determine, according to the priority information of the fragment, the size of the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, whether to store the fragment in the buffer space; if so, proceed to S304; if not, proceed to S308;
  • the cache space includes a high-priority exclusive cache space and a shared cache space.
  • the high-priority exclusive cache space is used to store high-priority fragments
  • the shared cache space is used to store fragments of every priority;
  • the shared cache space can store high-priority H_GINT fragments, medium-priority M_GINT fragments and low-priority L_GINT fragments;
  • when the received fragment is a high-priority fragment, it is preferentially stored in the high-priority exclusive cache space; only when the remaining space of the high-priority exclusive cache is insufficient is the high-priority fragment stored in the shared cache space.
  • Referring to FIG. 4, FIG. 4 is a schematic diagram of buffer space partitioning according to an embodiment of the present invention, in which the size of the cache space reserved exclusively for high-priority fragments is H_TH_MIN and the size of the shared buffer space is SHARE_TH; the cache thresholds of the H_GINT, M_GINT and L_GINT fragments are defined as H_TH, M_TH and L_TH respectively, and the sum of H_TH, M_TH and L_TH equals SHARE_TH.
  • If the buffer space already occupied by fragments of that priority is smaller than the cache threshold allocated to that priority, the GINT fragment is stored in the cache and the occupied buffer space value is increased accordingly;
  • otherwise the fragment is discarded, and the non-first fragments subsequently received on that port are also discarded.
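The admission decision just described can be sketched in C as follows; the struct fields, the assumed port count and the choice of byte counters are illustrative assumptions, since the patent specifies the behaviour in prose for FPGA logic rather than as software.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_DSL_PORTS 64            /* assumed port count, for illustration */

enum prio { PRIO_H = 0, PRIO_M = 1, PRIO_L = 2 };

/* Per-priority occupancy and thresholds for the shared cache, plus the
 * high-priority exclusive area and a per-port "drop the rest" flag.      */
struct buf_state {
    uint32_t occupied[3];           /* buffer space used per priority        */
    uint32_t threshold[3];          /* H_TH, M_TH, L_TH; sum equals SHARE_TH */
    uint32_t h_excl_used;           /* usage of the exclusive area           */
    uint32_t h_excl_size;           /* H_TH_MIN, size of the exclusive area  */
    bool     drop_rest[NUM_DSL_PORTS];
};

/* Decide whether a fragment of priority p arriving on 'port' is buffered. */
static bool admit_fragment(struct buf_state *s, enum prio p, unsigned port,
                           bool is_first, uint32_t frag_size)
{
    /* Non-first fragments that follow a dropped fragment are dropped too. */
    if (!is_first && s->drop_rest[port])
        return false;
    if (is_first)
        s->drop_rest[port] = false;

    /* High-priority fragments use the exclusive area first. */
    if (p == PRIO_H && s->h_excl_used + frag_size <= s->h_excl_size) {
        s->h_excl_used += frag_size;
        return true;
    }

    /* Shared area: admit only while below this priority's cache threshold. */
    if (s->occupied[p] < s->threshold[p]) {
        s->occupied[p] += frag_size;
        return true;
    }

    s->drop_rest[port] = true;      /* discard and keep discarding this packet */
    return false;
}
```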
  • S304: Store the fragment in the buffer space: the fragment is allocated a buffer space address and is stored in the corresponding space according to the allocated address.
  • S305 determining, according to a preset rule, whether to update the cache threshold of the priority; if yes, proceeding to S306; if not, proceeding to S309;
  • the buffer thresholds of the three priority fragments can be dynamically adjusted.
  • the adjustment range of H_TH is between H_TH_MIN and H_TH_MAX;
  • the adjustment range of M_TH is between M_TH_MIN and M_TH_MAX;
  • the adjustment range of L_TH is between L_TH_MIN and L_TH_MAX.
  • The cache thresholds of the priorities are dynamically updated and adjusted according to the buffer space already occupied by the GINT fragments of each priority and the old cache threshold of each priority.
  • In one embodiment, if the received fragment needs to be stored in the shared cache space, it is determined in S304, S305 and S306 according to the preset rule whether to update the cache threshold of the priority; if so, updating the cache threshold of the priority includes: if the space of the shared cache already occupied by the priority corresponding to the fragment is greater than the cache threshold of that priority minus the first preset value and less than the maximum value of the cache threshold of that priority, the cache threshold of that priority is increased by the first preset value as the new cache threshold of that priority, and the cache threshold of any one priority lower than that priority is decreased by the first preset value as the new cache threshold of that lower priority.
  • The preset rule includes that the size of the shared cache occupied by the priority corresponding to the fragment is greater than the cache threshold of that priority minus the first preset value, and is less than the maximum value of the cache threshold of that priority.
  • For example, the first preset value may be 64: if the number of H_GINT fragments stored in the current shared cache satisfies H_CNT > H_TH - 64 and H_CNT < H_TH_MAX, the available cache threshold H_TH of the H_GINT fragments is increased by 64 and the cache threshold of the M_GINT fragments or of the L_GINT fragments is decreased by 64, i.e. H_TH = H_TH_old + 64 and M_TH = M_TH_old - 64 or L_TH = L_TH_old - 64; similarly, if M_CNT > M_TH - 64 and M_CNT < M_TH_MAX, the cache threshold of the M_GINT fragments is increased and that of the L_GINT fragments is decreased, i.e. M_TH = M_TH_old + 64 and L_TH = L_TH_old - 64.
  • In one embodiment, if the received fragment is a high-priority fragment and needs to be stored in the shared buffer space, it is determined in S304, S305 and S306 according to the preset rule whether to update the cache threshold of the priority; if so, updating the cache threshold of the priority includes:
  • if the space of the shared cache already occupied by the high-priority fragments is smaller than the high-priority cache threshold minus the second preset value and greater than the minimum value of the high-priority cache threshold, the high-priority cache threshold is decreased by the third preset value as the new high-priority cache threshold, and the cache threshold of any one priority lower than the high priority is increased by the third preset value as the new cache threshold of that lower priority.
  • the preset rule includes that the space size of the shared cache occupied by the high priority fragment is smaller than the high priority cache threshold minus the second preset value, and is greater than the minimum value of the high priority cache threshold.
  • For example, the second preset value may be 160 and the third preset value may be 64: if, in the current shared cache, H_CNT < H_TH - 160 and H_CNT > H_TH_MIN, the cache threshold of the H_GINT fragments is decreased by 64 and the cache threshold of the M_GINT fragments or of the L_GINT fragments is increased by 64, i.e. H_TH = H_TH_old - 64 and M_TH = M_TH_old + 64 or L_TH = L_TH_old + 64; otherwise the existing values are kept.
  • In one embodiment, when the received fragment is a low-priority fragment, it is determined in S304, S305 and S306 according to the preset rule whether to update the cache threshold of the priority; if so, updating the cache threshold of the priority includes:
  • if the space of the shared cache already occupied by high-priority fragments is smaller than the fourth preset value, the space of the shared cache already occupied by medium-priority fragments is smaller than the medium-priority cache threshold minus the fifth preset value and greater than the minimum value of the medium-priority cache threshold, and the space of the shared cache already occupied by low-priority fragments is greater than the low-priority cache threshold minus the sixth preset value, then the low-priority cache threshold is increased by the sixth preset value as the new low-priority cache threshold, and the medium-priority cache threshold is decreased by the sixth preset value as the new medium-priority cache threshold.
  • The preset rule includes that the space of the shared cache occupied by high-priority fragments is smaller than the fourth preset value, the space of the shared cache occupied by medium-priority fragments is smaller than the medium-priority cache threshold minus the fifth preset value and greater than the minimum value of the medium-priority cache threshold, and the space of the shared cache occupied by low-priority fragments is greater than the low-priority cache threshold minus the sixth preset value.
  • the fifth preset value may be 160
  • the sixth preset value may be 64
  • For example, when the numbers of H_GINT and M_GINT fragments in the shared buffer are small and the number of L_GINT fragments is large, and the following conditions are met: M_CNT < M_TH - 160 and M_TH > M_TH_MIN and L_CNT > L_TH - 64, the cache threshold of the L_GINT fragments is raised and the cache threshold of the M_GINT fragments is lowered as follows: M_TH = M_TH_old - 64, L_TH = L_TH_old + 64; if these conditions are not all met, the cache thresholds of the medium- and low-priority fragments are kept unchanged.
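Putting the three update rules together, the following C sketch applies them with the example constants given in the text (64 for the first, third and sixth preset values, 160 for the second and fifth); the fourth preset value is not given a number in the text, so the placeholder below is an assumption, as are all identifiers. A caller would invoke update_thresholds() after a fragment is admitted, which roughly corresponds to S305/S306 in FIG. 3.

```c
#include <stdint.h>

enum prio { PRIO_H = 0, PRIO_M = 1, PRIO_L = 2 };   /* as in the earlier sketch */

#define STEP1 64    /* first preset value                     */
#define STEP2 160   /* second preset value                    */
#define STEP3 64    /* third preset value                     */
#define TH4   64    /* fourth preset value (assumed, not given in the text) */
#define STEP5 160   /* fifth preset value                     */
#define STEP6 64    /* sixth preset value                     */

struct thresholds {
    int32_t th[3];                  /* H_TH, M_TH, L_TH                         */
    int32_t th_min[3], th_max[3];   /* H_TH_MIN..H_TH_MAX etc. adjustment range */
};

/* cnt[] holds H_CNT, M_CNT, L_CNT: the shared-cache space currently used by
 * each priority. The sum th[H] + th[M] + th[L] stays equal to SHARE_TH.     */
static void update_thresholds(struct thresholds *t, const int32_t cnt[3])
{
    int32_t *th = t->th;

    /* Rule 1: a priority close to its threshold grows by the first preset
     * value, and one lower priority shrinks by the same amount.            */
    if (cnt[PRIO_H] > th[PRIO_H] - STEP1 && cnt[PRIO_H] < t->th_max[PRIO_H]) {
        th[PRIO_H] += STEP1;
        th[PRIO_M] -= STEP1;                /* or th[PRIO_L] -= STEP1 */
    } else if (cnt[PRIO_M] > th[PRIO_M] - STEP1 && cnt[PRIO_M] < t->th_max[PRIO_M]) {
        th[PRIO_M] += STEP1;
        th[PRIO_L] -= STEP1;
    }

    /* Rule 2: a lightly used high priority returns space to a lower one.   */
    if (cnt[PRIO_H] < th[PRIO_H] - STEP2 && cnt[PRIO_H] > t->th_min[PRIO_H]) {
        th[PRIO_H] -= STEP3;
        th[PRIO_M] += STEP3;                /* or th[PRIO_L] += STEP3 */
    }

    /* Rule 3: few H and M fragments but many L fragments: shift the sixth
     * preset value from the medium threshold to the low threshold.         */
    if (cnt[PRIO_H] < TH4 &&
        cnt[PRIO_M] < th[PRIO_M] - STEP5 && th[PRIO_M] > t->th_min[PRIO_M] &&
        cnt[PRIO_L] > th[PRIO_L] - STEP6) {
        th[PRIO_M] -= STEP6;
        th[PRIO_L] += STEP6;
    }
    /* If none of the conditions holds, all thresholds keep their old values. */
}
```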
  • S307 The to-be-sent fragment in the cache space is taken out according to the first-in-first-out rule, and is sent out, and the process ends.
  • After the fragment is stored in the buffer space in S304, the method further includes:
  • obtaining the descriptor information of the fragment, where the descriptor information includes priority information, size information and cache address information; the size information refers to the size of the fragment, that is, the amount of buffer memory it occupies;
  • sending the descriptor information into the fragment descriptor queue.
  • In S307, taking the fragments to be sent out of the buffer space according to the first-in-first-out rule and sending them out includes:
  • taking the descriptor information of the fragment to be sent out of the fragment descriptor queue according to the first-in-first-out rule: the fragment whose descriptor entered the fragment descriptor queue first is taken out first, and the fragment whose descriptor entered the queue later is taken out later;
  • retrieving the corresponding fragment from the buffer space according to the descriptor information;
  • adding the physical-layer encapsulation to the retrieved fragment and sending it out to the inline port.
  • S308: Discard the fragment.
  • S309: Do not update the cache threshold.
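The descriptor queue and the FIFO send path of S307 can be sketched as follows in C; the descriptor layout, queue depth and the add_phy_encapsulation() / send_to_inline_port() helpers are illustrative assumptions, not functions defined by the patent.

```c
#include <stddef.h>
#include <stdint.h>

/* Descriptor stored in the fragment descriptor queue (illustrative layout). */
struct frag_desc {
    uint8_t  priority;              /* high, medium or low                    */
    uint16_t size;                  /* fragment size, i.e. buffer memory used */
    uint32_t cache_addr;            /* where the fragment sits in the buffer  */
};

#define DESC_QUEUE_DEPTH 1024

struct desc_queue {
    struct frag_desc slots[DESC_QUEUE_DEPTH];
    uint32_t head, tail;            /* dequeue at head, enqueue at tail (FIFO) */
};

/* Assumed platform-specific helpers, declared only for illustration. */
size_t add_phy_encapsulation(uint8_t *frame, const uint8_t *frag, uint16_t size);
int    send_to_inline_port(const uint8_t *frame, size_t len);

static int enqueue_desc(struct desc_queue *q, const struct frag_desc *d)
{
    if (q->tail - q->head == DESC_QUEUE_DEPTH)
        return -1;                  /* queue full */
    q->slots[q->tail++ % DESC_QUEUE_DEPTH] = *d;
    return 0;
}

/* S307: take the oldest descriptor, fetch the fragment from the buffer,
 * add the physical-layer encapsulation and send it to the inline port.   */
static int send_next_fragment(struct desc_queue *q, const uint8_t *cache)
{
    uint8_t frame[2048];

    if (q->head == q->tail)
        return -1;                  /* nothing waiting to be sent */
    struct frag_desc d = q->slots[q->head++ % DESC_QUEUE_DEPTH];

    size_t len = add_phy_encapsulation(frame, cache + d.cache_addr, d.size);
    return send_to_inline_port(frame, len);
}
```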
  • According to the quality-of-service guarantee method for fragments provided by this embodiment, a fragment is received, the fragment having been cut from an Ethernet packet coming from at least one digital subscriber line port; the priority information of the fragment is parsed from the fragment; according to the priority information of the fragment, the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, it is determined whether to store the fragment in the buffer space; if so, the fragment is stored in the buffer space and it is determined according to a preset rule whether to update the cache threshold of the priority; if so, the cache threshold of the priority is updated; the fragments to be sent are taken out of the buffer space according to the first-in-first-out rule and sent out. With this scheme, when the uplink data channel of the DSL device is congested, high-priority fragments are guaranteed not to be lost, medium-priority fragments are lost as rarely as possible, and low-priority fragments may be lost, so that quality of service is guaranteed; at the same time, the reordering, and hence loss, of high- and low-priority fragments caused by the traditional egress QoS method is overcome.
  • FIG. 5 is a schematic diagram of an FPGA according to the embodiment.
  • the FPGA includes:
  • the receiving module 501 is configured to receive a fragment, the fragment being cut from an Ethernet packet coming from at least one DSL port;
  • FIG. 1 is a schematic diagram of a DSL broadband access device according to the embodiment.
  • The DSL broadband device consists of a main control board and multiple line cards, and the line cards and the main control board are connected through inline ports.
  • ITU-T G.999.1 is often referred to as the G.INT standard, and the shards defined in the G.INT standard are simply referred to as GINT fragments.
  • the FPGA aggregates GINT fragments from multiple DSL ports and sends them to the switch chip of the main control board through the inline port for reassembly.
  • the fragment received by the receiving module 501 refers to the GINT fragment.
  • The GINT fragments originate in the upstream direction from the DSL ports to the uplink port: the DSL chip cuts each Ethernet packet into fixed-size original fragments and then, as required by the ITU-T G.999.1 standard, adds a GINT header to the outside of each cut original fragment, thereby forming GINT fragments.
  • the receiving module 501 can poll each DSL port to receive GINT fragments from each DSL port in turn.
  • The GINT fragments transmitted on a DSL port are divided into three different priorities, including high-priority fragments H_GINT, medium-priority fragments M_GINT and low-priority fragments L_GINT.
  • The highest-priority H_GINT fragments must not be discarded, the medium-priority M_GINT fragments should be discarded as rarely as possible, and the low-priority L_GINT fragments may be discarded.
  • Each DSL port carries high-, medium- and low-priority Ethernet packets, which are cut and encapsulated in the DSL chip into fixed-length H_GINT, M_GINT and L_GINT fragments. By the identifier in the GINT header, a GINT fragment can be classified as a first fragment, an intermediate fragment or a tail fragment, corresponding respectively to the head, middle and tail of the Ethernet packet; the GINT header also identifies the DSL port number. That is, a GINT fragment carries the priority information of the fragment (high, medium or low priority), the fragment type information (first, intermediate or tail fragment), and the DSL port number information.
  • the parsing module 502 is configured to parse the priority information of the fragment from the fragment.
  • the first determining module 503 is configured to determine, according to the priority information of the fragment, the size of the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, whether to store the fragment in the buffer space;
  • The cache space includes a high-priority exclusive cache space and a shared cache space; the high-priority exclusive cache space is used to store high-priority fragments, and the shared cache space is used to store fragments of every priority; the shared cache space can store high-priority H_GINT fragments, medium-priority M_GINT fragments and low-priority L_GINT fragments.
  • When the fragment received by the receiving module 501 is a high-priority fragment, the storage module 504 preferentially stores it in the high-priority exclusive cache space; only when the remaining space of the high-priority exclusive cache is insufficient is the high-priority fragment stored in the shared cache space.
  • Referring to FIG. 4, FIG. 4 is a schematic diagram of buffer space partitioning according to an embodiment of the present invention, in which the size of the cache space reserved exclusively for high-priority fragments is H_TH_MIN and the size of the shared buffer space is SHARE_TH; the cache thresholds of the H_GINT, M_GINT and L_GINT fragments are defined as H_TH, M_TH and L_TH respectively, and the sum of H_TH, M_TH and L_TH equals SHARE_TH.
  • If the buffer space already occupied by fragments of that priority is smaller than the cache threshold allocated to that priority, the GINT fragment is stored in the cache and the occupied buffer space value is increased accordingly;
  • otherwise the fragment is discarded, and the non-first fragments subsequently received on that port are also discarded.
  • the storage module 504 is configured to: if the first determining module 503 determines that the fragment needs to be stored in the buffer space, the fragment is stored in the buffer space;
  • the fragment is allocated a buffer space address and is stored in the corresponding space according to the allocated address.
  • the second determining module 505 is configured to determine, according to the preset rule, whether to update the cache threshold of the priority
  • the update module 506 is configured to: if the second determining module 505 determines that the cache threshold of the priority needs to be updated according to the preset rule, update the cache threshold of the priority;
  • the cache thresholds of the three priority fragments can be dynamically adjusted.
  • the adjustment range of H_TH is between H_TH_MIN and H_TH_MAX; the adjustment range of M_TH is between M_TH_MIN and M_TH_MAX; the adjustment range of L_TH is between L_TH_MIN and L_TH_MAX.
  • The cache thresholds of the priorities are dynamically updated and adjusted according to the buffer space already occupied by the GINT fragments of each priority and the old cache threshold of each priority.
  • In one embodiment, if the first determining module 503 determines that the received fragment needs to be stored in the shared cache space and the second determining module 505 determines that the space of the shared cache already occupied by the priority corresponding to the fragment is greater than the cache threshold of that priority minus the first preset value and less than the maximum value of the cache threshold of that priority, the update module 506 increases the cache threshold of that priority by the first preset value as the new cache threshold of that priority, and decreases the cache threshold of any one priority lower than that priority by the first preset value as the new cache threshold of that lower priority.
  • the preset rule includes that the size of the shared cache occupied by the priority corresponding to the fragment is greater than the priority cache threshold minus the first preset value, and is less than the maximum value of the priority cache threshold.
  • In one embodiment, if the fragment received by the receiving module 501 is a high-priority fragment, the first determining module 503 determines that it needs to be stored in the shared cache space, and the second determining module 505 determines that the space of the shared buffer already occupied by the high-priority fragments is smaller than the high-priority cache threshold minus the second preset value and greater than the minimum value of the high-priority cache threshold, the update module 506 decreases the high-priority cache threshold by the third preset value as the new high-priority cache threshold, and increases the cache threshold of any one priority lower than the high priority by the third preset value as the new cache threshold of that lower priority.
  • the preset rule includes that the space size of the shared cache occupied by the high priority fragment is smaller than the high priority cache threshold minus the second preset value, and is greater than the minimum value of the high priority cache threshold.
  • the second preset value may be 160
  • In one embodiment, when the fragment received by the receiving module 501 is a low-priority fragment, if the second determining module 505 determines that the space of the shared buffer already occupied by high-priority fragments is smaller than the fourth preset value, the space of the shared buffer already occupied by medium-priority fragments is smaller than the medium-priority cache threshold minus the fifth preset value and greater than the minimum value of the medium-priority cache threshold, and the space of the shared buffer already occupied by low-priority fragments is greater than the low-priority cache threshold minus the sixth preset value, the update module 506 increases the low-priority cache threshold by the sixth preset value as the new low-priority cache threshold, and decreases the medium-priority cache threshold by the sixth preset value as the new medium-priority cache threshold.
  • The preset rule includes that the space of the shared cache occupied by high-priority fragments is smaller than the fourth preset value, the space of the shared cache occupied by medium-priority fragments is smaller than the medium-priority cache threshold minus the fifth preset value and greater than the minimum value of the medium-priority cache threshold, and the space of the shared cache occupied by low-priority fragments is greater than the low-priority cache threshold minus the sixth preset value.
  • the fifth preset value may be 160
  • the sixth preset value may be 64
  • For example, when the numbers of H_GINT and M_GINT fragments in the shared buffer are small and the number of L_GINT fragments is large, and the following conditions are met: M_CNT < M_TH - 160 and M_TH > M_TH_MIN and L_CNT > L_TH - 64, the cache threshold of the L_GINT fragments is raised and the cache threshold of the M_GINT fragments is lowered as follows: M_TH = M_TH_old - 64, L_TH = L_TH_old + 64; if these conditions are not all met, the cache thresholds of the medium- and low-priority fragments are kept unchanged.
  • the outgoing module 507 is configured to take the fragments to be sent out of the buffer space according to the first-in-first-out rule and send them out;
  • the FPGA further includes:
  • the obtaining module 508 is configured to obtain, after the storage module 504 stores the fragment into the buffer space, descriptor information of the fragment, where the descriptor information includes priority information, size information, and cache address information;
  • the size information refers to the size of the fragment, that is, the amount of buffer memory it occupies;
  • the sending module 509 is configured to send the descriptor information to the fragment descriptor queue
  • the outgoing module 507 includes:
  • Descriptor information extraction sub-module 5071 configured to retrieve descriptor information of the to-be-sent fragment from the fragment descriptor queue according to a first-in first-out rule
  • the fragment whose descriptor entered the fragment descriptor queue first is taken out first, and the fragment whose descriptor entered the queue later is taken out later;
  • the fragment retrieval sub-module 5072 is configured to retrieve the corresponding fragment from the buffer space according to the descriptor information;
  • the encapsulation sub-module 5073 is configured to add the physical-layer encapsulation to the retrieved fragment;
  • the outgoing sub-module 5074 is configured to send out the fragment encapsulated by the encapsulation sub-module.
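To summarize how the modules fit together, here is an illustrative C outline of the receive-side data flow of FIG. 5; every function name and signature below is an assumption introduced for readability, since the embodiment is realized as FPGA logic rather than software.

```c
#include <stdbool.h>
#include <stdint.h>

struct fragment { uint16_t size; /* plus header fields, payload, etc. */ };

/* Assumed interfaces standing in for the modules of FIG. 5; declarations
 * only, to illustrate the data flow, not the FPGA implementation.        */
bool     receive_fragment(struct fragment *f);                       /* module 501 */
int      parse_priority(const struct fragment *f);                   /* module 502 */
bool     admit_to_buffer(int prio, const struct fragment *f);        /* module 503 */
uint32_t store_fragment(const struct fragment *f);                   /* module 504 */
bool     threshold_update_needed(int prio);                          /* module 505 */
void     update_priority_threshold(int prio);                        /* module 506 */
void     enqueue_descriptor(int prio, uint16_t size, uint32_t addr); /* modules 508/509 */

/* One pass of the receive-side pipeline; the outgoing module 507 and its
 * sub-modules 5071..5074 later drain the descriptor queue in FIFO order. */
static void process_one_fragment(void)
{
    struct fragment f;

    if (!receive_fragment(&f))
        return;                              /* nothing received   */
    int prio = parse_priority(&f);
    if (!admit_to_buffer(prio, &f))
        return;                              /* fragment discarded */
    uint32_t addr = store_fragment(&f);
    if (threshold_update_needed(prio))
        update_priority_threshold(prio);
    enqueue_descriptor(prio, f.size, addr);
}
```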
  • According to the FPGA provided by this embodiment, a fragment is received, the fragment having been cut from an Ethernet packet coming from at least one digital subscriber line port; the priority information of the fragment is parsed from the fragment; according to the priority information of the fragment, the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, it is determined whether to store the fragment in the buffer space; if so, the fragment is stored in the buffer space and it is determined according to a preset rule whether to update the cache threshold of the priority; if so, the cache threshold of the priority is updated; the fragments to be sent are taken out of the buffer space according to the first-in-first-out rule and sent out. With this scheme, when the uplink data channel of the DSL device is congested, high-priority fragments are guaranteed not to be lost, medium-priority fragments are lost as rarely as possible, and low-priority fragments may be lost, so that quality of service is guaranteed; at the same time, the reordering, and hence loss, of high- and low-priority fragments caused by the traditional egress QoS method is overcome.
  • The modules or steps of the above embodiments of the present invention can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed over a network formed by multiple computing devices, and they can be implemented in program code executable by a computing device, so that they can be stored in a storage medium (ROM/RAM, magnetic disk, optical disc) and executed by a computing device. In some cases, the steps shown or described may be performed in an order different from the order here; alternatively, they may each be fabricated as an individual integrated circuit module, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Therefore, the present application is not limited to any particular combination of hardware and software.
  • According to the embodiments of the present invention, when the uplink data channel of the DSL device is congested, high-priority fragments are not lost, medium-priority fragments are lost as rarely as possible, and low-priority fragments may be lost, so that quality of service is guaranteed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A quality-of-service guarantee method for fragments and an FPGA. A fragment is received, the fragment having been cut from an Ethernet packet coming from at least one DSL port; the priority information of the fragment is parsed from the fragment; according to the priority information of the fragment, the size of the buffer space already occupied by fragments of that priority, and the cache threshold allocated to that priority, it is determined whether to store the fragment in the buffer space; if so, the fragment is stored in the buffer space, and it is determined according to a preset rule whether to update the cache threshold of the priority; if so, the cache threshold of the priority is updated; the fragments to be sent are taken out of the buffer space according to a first-in-first-out rule and sent out.

Description

一种分片的服务质量保证方法及现场可编程逻辑门阵列 技术领域
本申请涉及但不限于宽带接入数据通信领域,尤指一种分片的服务质量保证方法及现场可编程逻辑门阵列。
背景技术
DSL(Digital Subscriber Line,数字用户线路)是一种宽带接入技术,通过传统的电话通双绞线,可以传送高达2Gbit/s的数据。通常的DSL接入设备的架构如图1所示。DSL设备由一个主控板和多个线卡构成,线卡上有DSL芯片和FPGA(Field Programmable Gate Array,现场可编程逻辑门阵列),主控板上有交换芯片。在从DSL端口到上联口的上行方向,DSL芯片对以太网包进行固定大小的切割,形成原始分片,然后按照ITU-TG.999.1标准的要求,在这些切割的原始分片外部加上GINT头(ITU-TG.999.1通常简称G.INT标准,把G.INT标准中定义的分片简称为GINT分片;G表示ITU标准中的G系列标准,INT是英文Interface的缩写),形成GINT分片。FPGA收到来自DSL芯片的每个端口的GINT分片后,在芯片内部存储、汇聚后通过内联口发送到主控板的交换芯片。交换芯片把来自每个DSL端口的GINT分片重组成以太网包,并经过处理后再经过上联口发送出DSL设备。
DSL端口上传输的GINT分片分为三种不同的优先级。优先级最高的H_GINT分片不能丢弃;优先级中等的业务M_GINT分片是尽量不丢弃;优先级低的业务L_GINT分片可以丢弃。
由于DSL设备是一种数据汇聚型的设备,线卡内联口的有效带宽通常小于每个DSL端口总的带宽和。这样,内联口出口处就会形成拥塞,造成GINT分片丢失。为了保证最高优先级的H_GINT分片不被丢失,中等优先级的业务M_GINT分片尽量不丢失,需要在FPGA芯片上实施服务质量QoS(Quality of Service,服务质量)。如图2所示,图2是在图1的基础上,在FPGA芯片上实施QoS,FPGA芯片包括接口处理模块、分片解析分类模 块、分片描述符队列、队列调度模块、分片发送模块,现在通用的QoS方法是出口QoS,其思路是接口处理模块接收每个端口发来的分片,分片解析分类模块根据分片的优先级信息将分片进行解析分类,在出内联口之前设置三个队列,并且把三个队列分别设置为高、中、低三种优先级,对应存放H、M、L三个优先级的GINT分片。队列后面的队列调度模块把队列中的GINT分片按照优先级调度出来,通过后级的分片发送模块送往内联口。队列调度模块在调度时,先把高优先级队列的H_GINT分片调度完了,再调度中优先级队列中的M_GINT分片,最后调度低优先级队列中的L_GINT分片。每个DSL端口都有高、中、低优先级的以太网包,并且在DSL芯片中被切割、封装成长度固定的H_GINT、M_GINT、L_GINT分片。GINT分片通过GINT头的标识,可以分为首分片、中间分片和尾分片,分别对应以太网包的头部、中间以及尾部;同时,GINT头中也标识了DSL端口号。被调度出的GINT分片从内联口进入主控板的交换芯片后,按照端口被重组成以太网包。
发明概述
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。
当DSL端口中先收到低优先级的以太网包,经过处理后形成L_GINT分片进入FPGA的低优先级队列。此时高优先级队列是空的,于是队列调度模块就开始把L_GINT分片调度到主控板的交换芯片进行重组。此时,交换芯片的重组模块收到了首L_GINT分片和中间L_GINT分片,但尚未收到尾L_GINT分片。当低优先级队列中的L_GINT分片尚未调度结束时,来自同一个DSL端口的高优先级报文也被切割成H_GINT分片,进入FPGA的高优先级队列。于是队列调度模块开始把H_GINT分片调度出来,送到交换芯片的重组模块进行重组。此时,对于某个DSL端口而言,交换芯片的重组模块先后收到了首L_GINT分片、中间L_GINT分片、首H_GINT分片、中间H_GINT分片以及尾H_GINT分片。重组模块进行重组时,会把 首分片和尾分片之间的所有分片重组成一个以太网包。由于这些首、尾分片分别来自不同的以太网包,最终重组会出现失败,导致高优先级的以太网包被丢弃。因此,采用传统的出口QoS方法进行处理,会在主控的交换芯片出现GINT分片乱序、错包、丢包、无法在DSL设备上使用的情况。
本发明实施例提供一种分片的服务质量保证方法及现场可编程逻辑门阵列,以避免在主控的交换芯片出现分片乱序、错包、丢包、无法在DSL设备上使用的情况。
本发明实施例提供一种分片的服务质量保证方法,包括:
接收分片,分片为来自至少一个数字用户线路端口的以太网包切割而成;
从该分片中解析出该分片的优先级信息;
根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,判断是否将该分片存入缓存空间中;
若是,则将该分片存入缓存空间中,并根据预设规则判断是否对优先级的缓存门限值进行更新;
若是,则对优先级的缓存门限值进行更新;
根据先入先出的规则将缓存空间中的待发送分片取出,并外发。
本发明实施例提供一种现场可编程逻辑门阵列,包括:
接收模块,设置为接收分片,分片为来自至少一个数字用户线路端口的以太网包切割而成;
解析模块,设置为从该分片中解析出该分片的优先级信息;
第一判断模块,设置为根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,判断是否将该分片存入缓存空间中;
存储模块,设置为若第一判断模块判断需要将该分片存入缓存空间中,则将该分片存入缓存空间中;
第二判断模块,设置为根据预设规则判断是否对优先级的缓存门限值 进行更新;
更新模块,设置为若第二判断模块根据预设规则判断需要对优先级的缓存门限值进行更新,则对优先级的缓存门限值进行更新;
外发模块,设置为根据先入先出的规则将缓存空间中的待发送分片取出,并外发。
本发明实施例还提供一种计算机存储介质,计算机存储介质中存储有计算机可执行指令,计算机可执行指令用于执行前述的分片的服务质量保证方法。
根据本发明实施例提供的一种分片的服务质量保证方法及现场可编程逻辑门阵列,通过接收分片,分片为来自至少一个数字用户线路端口的以太网包切割而成;从该分片中解析出该分片的优先级信息;根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,判断是否将该分片存入缓存空间中;若是,则将该分片存入缓存空间中,并根据预设规则判断是否对优先级的缓存门限值进行更新;若是,则对优先级的缓存门限值进行更新;根据先入先出的规则将缓存空间中的待发送分片取出,并外发;采用上述方案,使得DSL设备在上行数据通道发生拥塞时,保证高优先级的分片不丢包,中优先级的分片尽量不丢包,低优先级分片可以丢包,以保证服务质量,同时克服了采用传统的出口QoS方法导致的高低优先级分片的乱序、从而丢包的情况。
在阅读并理解了附图和详细描述后,可以明白其他方面。
附图概述
图1为DSL宽带接入设备的示意图;
图2为出口QoS的宽带接入设备的示意图;
图3为本发明实施例一提供的一种分片的服务质量保证方法的流程图;
图4为本发明实施例一、二提供的一种缓存空间划分的示意图;
图5为本发明实施例二提供的一种FPGA的示意图。
详述
下面通过实施方式结合附图对本发明实施例作进一步详细说明。
实施例一
本实施例提供一种分片的服务质量保证方法,请参见图3,图3为本实施例提供的一种分片的服务质量保证方法的流程图,该方法包括以下步骤:
S301:接收分片,分片为来自至少一个DSL端口的以太网包切割而成;
本实施例提供的分片的服务质量保证方法可以采用FPGA来实现。FPGA在整个系统中的位置和作用如图1所示。图1为本实施例提供的一种DSL宽带接入设备架构图;DSL宽带设备由一个主控板和多个线卡组成,线卡和主控板之间通过内联口连接。线卡上有多个DSL芯片,这些DSL芯片有多个DSL端口。来自多个DSL端口的以太网包进入DSL芯片后,被切割成分片,然后按照ITU-T G.999.1标准的要求,在这些切割的分片外部加上GINT头,成为GINT分片被送到FPGA;ITU-T G.999.1通常简称G.INT标准,把G.INT标准中定义的分片简称为GINT分片。FPGA把来自多个DSL端口的GINT分片汇聚后,通过内联口送到主控板的交换芯片进行重组。
S301中的分片是指GINT分片,GINT分片的来源为在从DSL端口到上联口的上行方向,DSL芯片对以太网包进行固定大小的切割,形成原始分片,然后按照ITU-T G.999.1标准的要求,在这些切割的原始分片外部加上GINT头,由此便形成GINT分片。
可以对每个DSL端口进行轮询,轮流接收来自每个DSL端口的GINT分片。
在一种实施方式中,DSL端口上传输的GINT分片分为三种不同的优先级,包括高优先级分片H_GINT、中优先级分片M_GINT、低优先级分片L_GINT;优先级最高的H_GINT分片不能丢弃,优先级中等的业务 M_GINT分片是尽量不丢弃,优先级低的业务L_GINT分片可以丢弃。
每个DSL端口都有高、中、低优先级的以太网包,并且在DSL芯片中被切割、封装成长度固定的H_GINT、M_GINT、L_GINT分片;GINT分片通过GINT头的标识,可以分为首分片、中间分片和尾分片,分别对应以太网包的头部、中间以及尾部;同时,GINT头中也标识了DSL端口号;也即一个GINT分片中包含该分片的优先级信息(优先级信息包括高优先级、中优先级、低优先级)、分片类型信息(分片类型信息包括首分片、中间分片、尾分片)、以及DSL端口号信息。
S302:从该分片中解析出该分片的优先级信息;
在接收到分片后,从该分片中解析出该分片是高优先级分片H_GINT、还是中优先级分片M_GINT、还是低优先级分片L_GINT;
S303:根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,判断是否将该分片存入缓存空间中;若是,则进入S304,若否,则进入S308;
缓存空间包括高优先级独享缓存空间和共享缓存空间,高优先级独享缓存空间用于存储高优先级的分片,共享缓存空间用于存储每个优先级的分片;共享缓存空间可以存储高优先级分片H_GINT、中优先级分片M_GINT以及低优先级分片L_GINT;
当接收到的分片为高优先级分片时,优先存入高优先级独享缓存空间,当高优先级独享缓存的剩余空间不足时,才将高优先级分片存入共享缓存空间。
参见图4,图4为本实施例提供的一种缓存空间划分的示意图,其中,高优先级分片独享缓存空间的大小是H_TH_MIN,共享缓存空间的大小是SHARE_TH;定义H_GINT、M_GINT、L_GINT分片的缓存门限值分别为H-TH、M-TH、L-TH,H-TH、M-TH、L-TH三者之和等于SHARE_TH。
若该优先级的分片已占用的缓存空间大小小于分配给该优先级的缓存门限值,则将该GINT分片存入缓存,且已占用的缓存空间大小的数值对应增加;否则把该分片丢弃,该端口后面收到的非首分片也会被丢弃。
除了根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,来判断是否将该分片存入缓存空间中之外,还可以根据该分片的优先级信息、该优先级的分片已经占用的缓存计数器计数值、以及分配给该优先级的缓存门限值,来判断是否将该分片存入缓存空间中;在该GINT分片存入缓存之后,占用缓存计数器加1。
S304:将该分片存入缓存空间中;
若判断出将该分片可以存入缓存空间中,则给该分片分配缓存空间地址,按照分配的地址,把该分片存储到相应的空间。
S305:根据预设规则判断是否对优先级的缓存门限值进行更新;若是,则进入S306,若否,则进入S309;
S306:若是,则对优先级的缓存门限值进行更新;
可以动态调整三种优先级分片的缓存门限值,H-TH的调整范围在H_TH_MIN、H_TH_MAX之间;M_TH的调整范围在M_TH_MIN、M_TH_MAX之间;L_TH的调整范围在L_TH_MIN、L_TH_MAX之间。
根据每个优先级GINT分片已占用的缓存空间大小以及旧的每个优先级的缓存门限值,动态更新调整优先级的缓存门限值。
在一种实施方式中,若接收到的分片需要存入共享缓存空间,S304、S305以及S306中根据预设规则判断是否对优先级的缓存门限值进行更新;若是,则对优先级的缓存门限值进行更新包括:
若该分片对应的优先级已占用的共享缓存的空间大小大于该优先级缓存门限值减去第一预设值,且小于该优先级缓存门限的最大值,则将该优先级缓存门限值加上第一预设值作为新的该优先级缓存门限值,同时将任何一个低于该优先级的缓存门限值减去第一预设值作为新的该低级别优先级的缓存门限值。
其中,预设规则包括该分片对应的优先级已占用的共享缓存的空间大小大于该优先级缓存门限值减去第一预设值,且小于该优先级缓存门限的最大值。
例如,第一预设值可以为64,若当前共享缓存中存放的H_GINT分片数量H_CNT>H_TH-64且H_CNT<H_TH_MAX,则把H_GINT分片的可用缓存门限值H_TH增加64,同时把M_GINT分片的缓存门限值减少64或者把L_GINT分片的缓存门限值减少64,即H_TH=H_TH_old+64,M_TH=M_TH_old-64或者L_TH=L_TH_old-64。
若当前共享缓存中M_GINT分片的数量M_CNT>M_TH-64且M_CNT<M_TH_MAX,则增加M_GINT分片的缓存门限值,同时减少L_GINT分片的缓存门限值,即M_TH=M_TH_old+64,L_TH=L_TH_old–64。
在一种实施方式中,若接收到的分片为高优先级分片,且需要存入共享缓存空间,S304、S305以及S306中根据预设规则判断是否对优先级的缓存门限值进行更新;若是,则对优先级的缓存门限值进行更新包括:
若该高优先级分片已占用的共享缓存的空间大小小于高优先级缓存门限值减去第二预设值,且大于高优先级缓存门限的最小值,则将高优先级缓存门限值减去第三预设值作为新的高优先级缓存门限值,同时将任何一个低于该高优先级的缓存门限值加上第三预设值作为新的该低级别优先级的缓存门限值。
其中,预设规则包括该高优先级分片已占用的共享缓存的空间大小小于高优先级缓存门限值减去第二预设值,且大于高优先级缓存门限的最小值。
例如第二预设值可以为160,第三预设值可以为64,若当前共享缓存中H_CNT<H_TH-160并且H_CNT>H_TH_MIN,则把H_GINT分片的缓存门限值减少64,同时把M_GINT分片的缓存门限值增加64或者把L_GINT分片的缓存门限值增加64,即H_TH=H_TH_old-64,M_TH=M_TH_old+64或者L_TH=L_TH_old+64。否则,维持已有数值。
在一种实施方式中,当接收到的分片为低优先级分片时,S304、S305以及S306中根据预设规则判断是否对优先级的缓存门限值进行更新;若是,则对优先级的缓存门限值进行更新包括:
若高优先级分片已占用的共享缓存的空间大小小于第四预设值,中优先级分片已占用的共享缓存的空间大小小于中优先级缓存门限值减去第五预设值,且大于中优先级缓存门限的最小值,且低优先级分片已占用的共享缓存的空间大小大于低优先级缓存门限值减去第六预设值,则将低优先级缓存门限值加上第六预设值作为新的低优先级缓存门限值,同时将中优先级缓存门限值减去第六预设值作为新的中优先级缓存门限值。
其中,预设规则包括高优先级分片已占用的共享缓存的空间大小小于第四预设值,中优先级分片已占用的共享缓存的空间大小小于中优先级缓存门限值减去第五预设值,且大于中优先级缓存门限的最小值,且低优先级分片已占用的共享缓存的空间大小大于低优先级缓存门限值减去第六预设值。
例如第五预设值可以为160,第六预设值可以为64,共享缓存中的H_GINT分片、M_GINT分片数量很少,同时L_GINT分片数量较多,且符合以下条件:M_CNT<M_TH-160且M_TH>M_TH_MIN,且L_CNT>L_TH-64,则把L_GINT分片的缓存门限值按照如下方式提高,同时降低M_GINT分片的缓存门限值:M_TH=M_TH_old-64,L_TH=L_old+64。如果上述条件都不满足,则维持中优先级和低优先级分片的缓存门限值不变。
S307:根据先入先出的规则将缓存空间中的待发送分片取出,并外发,流程结束。
在S304的将该分片存入缓存空间中之后,还包括:
获取该分片的描述符信息,描述符信息包括优先级信息、大小信息以及缓存地址信息;
大小信息是指该分片的大小,即所占内存大小;
将描述符信息送入分片描述符队列;
S307根据先入先出的规则将缓存空间中的待发送分片取出,并外发包括:
根据先入先出的规则从分片描述符队列中取出待发送分片的描述符信 息;
先进入分片描述符队列的分片先取出,后进入分片描述符队列的分片后取出;
根据描述符信息从缓存空间中取出对应的分片;
将取出的分片加上物理层封装后外发到内联口。
S308:将该分片丢弃。
S309:不更新。
根据本发明实施例提供的一种分片的服务质量保证方法,通过接收分片,分片为来自至少一个数字用户线路端口的以太网包切割而成;从该分片中解析出该分片的优先级信息;根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,判断是否将该分片存入缓存空间中;若是,则将该分片存入缓存空间中,并根据预设规则判断是否对优先级的缓存门限值进行更新;若是,则对优先级的缓存门限值进行更新;根据先入先出的规则将缓存空间中的待发送分片取出,并外发;采用上述方案,使得DSL设备在上行数据通道发生拥塞时,保证高优先级的分片不丢包,中优先级的分片尽量不丢包,低优先级分片可以丢包,以保证服务质量,同时克服了采用传统的出口QoS方法导致的高低优先级分片的乱序、从而丢包的问题。
实施例二
本实施例提供一种FPGA,请参见图5,图5为本实施例提供的一种FPGA的示意图,该FPGA包括:
接收模块501,设置为接收分片,分片为来自至少一个DSL端口的以太网包切割而成;
本实施例提供的FPGA在整个系统中的位置和作用如图1所示,图1为本实施例提供的一种DSL宽带接入设备架构图;DSL宽带设备由一个主控板和多个线卡组成,线卡和主控板之间通过内联口连接。线卡上有多个DSL芯片,这些DSL芯片有多个DSL端口。来自多个DSL端口的以太网包进入DSL芯片后,被切割成分片,然后按照ITU-T G.999.1标准的要求, 在这些切割的分片外部加上GINT头,成为GINT分片被送到FPGA;ITU-T G.999.1通常简称G.INT标准,把G.INT标准中定义的分片简称为GINT分片。FPGA把来自多个DSL端口的GINT分片汇聚后,通过内联口送到主控板的交换芯片进行重组。
接收模块501接收的分片是指GINT分片,GINT分片的来源为在从DSL端口到上联口的上行方向,DSL芯片对以太网包进行固定大小的切割,形成原始分片,然后按照ITU-T G.999.1标准的要求,在这些切割的原始分片外部加上GINT头,由此便形成GINT分片。
接收模块501可以对每个DSL端口进行轮询,轮流接收来自每个DSL端口的GINT分片。
在一种实施方式中,DSL端口上传输的GINT分片分为三种不同的优先级,包括高优先级分片H_GINT、中优先级分片M_GINT、低优先级分片L_GINT;优先级最高的H_GINT分片不能丢弃,优先级中等的业务M_GINT分片是尽量不丢弃,优先级低的业务L_GINT分片可以丢弃。
每个DSL端口都有高、中、低优先级的以太网包,并且在DSL芯片中被切割、封装成长度固定的H_GINT、M_GINT、L_GINT分片;GINT分片通过GINT头的标识,可以分为首分片、中间分片和尾分片,分别对应以太网包的头部、中间以及尾部;同时,GINT头中也标识了DSL端口号;也即一个GINT分片中包含该分片的优先级信息(优先级信息包括高优先级、中优先级、低优先级)、分片类型信息(分片类型信息包括首分片、中间分片、尾分片)、以及DSL端口号信息。
解析模块502,设置为从该分片中解析出该分片的优先级信息;
在接收到分片后,从该分片中解析出该分片是高优先级分片H_GINT、还是中优先级分片M_GINT、还是低优先级分片L_GINT;
第一判断模块503,设置为根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,判断是否将该分片存入缓存空间中;
缓存空间包括高优先级独享缓存空间和共享缓存空间,高优先级独享 缓存空间用于存储高优先级的分片,共享缓存空间用于存储每个优先级的分片;共享缓存空间可以存储高优先级分片H_GINT、中优先级分片M_GINT以及低优先级分片L_GINT;
当接收模块501接收到的分片为高优先级分片时,存储模块504将该高优先级分片优先存入高优先级独享缓存空间,当高优先级独享缓存的剩余空间不足时,才将高优先级分片存入共享缓存空间。
参见图4,图4为本实施例提供的一种缓存空间划分的示意图,其中,高优先级分片独享缓存空间的大小是H_TH_MIN,共享缓存空间的大小是SHARE_TH;定义H_GINT、M_GINT、L_GINT分片的缓存门限值分别为H-TH、M-TH、L-TH,H-TH、M-TH、L-TH三者之和等于SHARE_TH。
若该优先级的分片已占用的缓存空间大小小于分配给该优先级的缓存门限值,则将该GINT分片存入缓存,且已占用的缓存空间大小的数值对应增加;否则把该分片丢弃,该端口后面收到的非首分片也会被丢弃。
除了根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,来判断是否将该分片存入缓存空间中之外,还可以根据该分片的优先级信息、该优先级的分片已经占用的缓存计数器计数值、以及分配给该优先级的缓存门限值,来判断是否将该分片存入缓存空间中;在该GINT分片存入缓存之后,占用缓存计数器加1。
存储模块504,设置为若第一判断模块503判断需要将该分片存入缓存空间中,则将该分片存入缓存空间中;
若判断出将该分片可以存入缓存空间中,则给该分片分配缓存空间地址,按照分配的地址,把该分片存储到相应的空间。
第二判断模块505,设置为根据预设规则判断是否对优先级的缓存门限值进行更新;
更新模块506,设置为若第二判断模块505根据预设规则判断需要对优先级的缓存门限值进行更新,则对优先级的缓存门限值进行更新;
可以动态调整三种优先级分片的缓存门限值,H-TH的调整范围在 H_TH_MIN、H_TH_MAX之间;M_TH的调整范围在M_TH_MIN、M_TH_MAX之间;L_TH的调整范围在L_TH_MIN、L_TH_MAX之间。
根据每个优先级GINT分片已占用的缓存空间大小以及旧的每个优先级的缓存门限值,动态更新调整优先级的缓存门限值。
在一种实施方式中,若第一判断模块503判断出接收到的分片需要存入共享缓存空间,第二判断模块505判断出该分片对应的优先级已占用的共享缓存的空间大小大于该优先级缓存门限值减去第一预设值,且小于该优先级缓存门限的最大值,则更新模块506将该优先级缓存门限值加上第一预设值作为新的该优先级缓存门限值,同时将任何一个低于该优先级的缓存门限值减去第一预设值作为新的该低级别优先级的缓存门限值。
其中,预设规则包括该分片对应的优先级已占用的共享缓存的空间大小大于该优先级缓存门限值减去第一预设值,且小于该优先级缓存门限的最大值。
例如,第一预设值可以为64,若当前共享缓存中存放的H_GINT分片数量H_CNT>H_TH-64且H_CNT<H_TH_MAX,则把H_GINT分片的可用缓存门限值H_TH增加64,同时把M_GINT分片的缓存门限值减少64或者把L_GINT分片的缓存门限值减少64,即H_TH=H_TH_old+64,M_TH=M_TH_old-64或者L_TH=L_TH_old-64。
若当前共享缓存中M_GINT分片的数量M_CNT>M_TH-64且M_CNT<M_TH_MAX,则增加M_GINT分片的缓存门限值,同时减少L_GINT分片的缓存门限值,即M_TH=M_TH_old+64,L_TH=L_TH_old–64。
在一种实施方式中,若接收模块501接收到的分片为高优先级分片,且第一判断模块503判断出该高优先级分片需要存入共享缓存空间,第二判断模块505判断出该高优先级分片已占用的共享缓存的空间大小小于高优先级缓存门限值减去第二预设值,且大于高优先级缓存门限的最小值,则更新模块506将高优先级缓存门限值减去第三预设值作为新的高优先级缓存门限值,同时将任何一个低于该高优先级的缓存门限值加上第三预设值作为新的该低级别优先级的缓存门限值。
其中,预设规则包括该高优先级分片已占用的共享缓存的空间大小小于高优先级缓存门限值减去第二预设值,且大于高优先级缓存门限的最小值。
例如第二预设值可以为160,第三预设值可以为64,若当前共享缓存中H_CNT<H_TH-160并且H_CNT>H_TH_MIN,则把H_GINT分片的缓存门限值减少64,同时把M_GINT分片的缓存门限值增加64或者把L_GINT分片的缓存门限值增加64,即H_TH=H_TH_old-64,M_TH=M_TH_old+64或者L_TH=L_TH_old+64。否则,维持已有数值。
在一种实施方式中,当接收模块501接收到的分片为低优先级分片时,若第二判断模块505判断出高优先级分片已占用的共享缓存的空间大小小于第四预设值,中优先级分片已占用的共享缓存的空间大小小于中优先级缓存门限值减去第五预设值,且大于中优先级缓存门限的最小值,且低优先级分片已占用的共享缓存的空间大小大于低优先级缓存门限值减去第六预设值,则更新模块506将低优先级缓存门限值加上第六预设值作为新的低优先级缓存门限值,同时将中优先级缓存门限值减去第六预设值作为新的中优先级缓存门限值。
其中,预设规则包括高优先级分片已占用的共享缓存的空间大小小于第四预设值,中优先级分片已占用的共享缓存的空间大小小于中优先级缓存门限值减去第五预设值,且大于中优先级缓存门限的最小值,且低优先级分片已占用的共享缓存的空间大小大于低优先级缓存门限值减去第六预设值。
例如第五预设值可以为160,第六预设值可以为64,共享缓存中的H_GINT分片、M_GINT分片数量很少,同时L_GINT分片数量较多,且符合以下条件:M_CNT<M_TH-160且M_TH>M_TH_MIN,且L_CNT>L_TH-64,则把L_GINT分片的缓存门限值按照如下方式提高,同时降低M_GINT分片的缓存门限值:M_TH=M_TH_old-64,L_TH=L_old+64。如果上述条件都不满足,则维持中优先级和低优先级分片的缓存门限值不变。
外发模块507,设置为根据先入先出的规则将缓存空间中的待发送分片 取出,并外发;
在一实施方式中,该FPGA还包括:
获取模块508,设置为在存储模块504将该分片存入缓存空间中之后,获取该分片的描述符信息,描述符信息包括优先级信息、大小信息以及缓存地址信息;
大小信息是指该分片的大小,即所占内存大小;
送入模块509,设置为将描述符信息送入分片描述符队列;
外发模块507包括:
描述符信息取出子模块5071,设置为根据先入先出的规则从分片描述符队列中取出待发送分片的描述符信息;
先进入分片描述符队列的分片先取出,后进入分片描述符队列的分片后取出;
分片取出子模块5072,设置为根据描述符信息从缓存空间中取出对应的分片;
封装子模块5073,设置为将取出的分片加上物理层封装;
外发子模块5074,设置为将封装子模块进行封装后的分片外发。
根据本发明实施例提供的一种FPGA,通过接收分片,分片为来自至少一个数字用户线路端口的以太网包切割而成;从该分片中解析出该分片的优先级信息;根据该分片的优先级信息、该优先级的分片已占用的缓存空间大小、以及分配给该优先级的缓存门限值,判断是否将该分片存入缓存空间中;若是,则将该分片存入缓存空间中,并根据预设规则判断是否对优先级的缓存门限值进行更新;若是,则对优先级的缓存门限值进行更新;根据先入先出的规则将缓存空间中的待发送分片取出,并外发;采用上述方案,使得DSL设备在上行数据通道发生拥塞时,保证高优先级的分片不丢包,中优先级的分片尽量不丢包,低优先级分片可以丢包,以保证服务质量,同时克服了采用传统的出口QoS方法导致的高低优先级分片的乱序、从而丢包的问题。
上述本发明实施例的各模块或各步骤可以用通用的计算装置来实现, 它们可以集中在单个的计算装置上,或者分布在多个计算装置所组成的网络上,它们可以用计算装置可执行的程序代码来实现,从而,可以将它们存储在存储介质(ROM/RAM、磁碟、光盘)中由计算装置来执行,并且在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤,或者将它们分别制作成各个集成电路模块,或者将它们中的多个模块或步骤制作成单个集成电路模块来实现。所以,本申请不限制于任何特定的硬件和软件结合。
以上内容是结合实施方式对本发明实施例所作的进一步详细说明,不能认定本申请的实施只局限于这些说明。对于本申请所属技术领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干简单推演或替换,都应当视为属于本申请的保护范围。
工业实用性
通过本发明实施例,使得DSL设备在上行数据通道发生拥塞时,保证高优先级的分片不丢包,中优先级的分片尽量不丢包,低优先级分片可以丢包,以保证服务质量。

Claims (13)

  1. A quality-of-service guarantee method for fragments, comprising:
    receiving a fragment, the fragment being cut from an Ethernet packet coming from at least one digital subscriber line port;
    parsing priority information of the fragment from the fragment;
    determining, according to the priority information of the fragment, the size of the buffer space already occupied by fragments of that priority, and a cache threshold allocated to that priority, whether to store the fragment in a buffer space;
    if so, storing the fragment in the buffer space, and determining according to a preset rule whether to update the cache threshold of the priority;
    if so, updating the cache threshold of the priority; and
    taking fragments to be sent out of the buffer space according to a first-in-first-out rule and sending them out.
  2. The quality-of-service guarantee method for fragments according to claim 1, wherein the buffer space comprises a high-priority exclusive cache space and a shared cache space, the high-priority exclusive cache space being used to store high-priority fragments and the shared cache space being used to store fragments of every priority;
    when the received fragment is a high-priority fragment, it is preferentially stored in the high-priority exclusive cache space.
  3. The quality-of-service guarantee method for fragments according to claim 2, wherein,
    if the received fragment needs to be stored in the shared cache space, the determining according to a preset rule whether to update the cache threshold of the priority, and if so, updating the cache threshold of the priority, comprises:
    if the space of the shared cache already occupied by the priority corresponding to the fragment is greater than the cache threshold of that priority minus a first preset value and less than the maximum value of the cache threshold of that priority, increasing the cache threshold of that priority by the first preset value as the new cache threshold of that priority, and decreasing the cache threshold of any one priority lower than that priority by the first preset value as the new cache threshold of that lower priority.
  4. The quality-of-service guarantee method for fragments according to claim 2, wherein,
    if the received fragment is a high-priority fragment and needs to be stored in the shared cache space, the determining according to a preset rule whether to update the cache threshold of the priority, and if so, updating the cache threshold of the priority, comprises:
    if the space of the shared cache already occupied by the high-priority fragments is smaller than the high-priority cache threshold minus a second preset value and greater than the minimum value of the high-priority cache threshold, decreasing the high-priority cache threshold by a third preset value as the new high-priority cache threshold, and increasing the cache threshold of any one priority lower than the high priority by the third preset value as the new cache threshold of that lower priority.
  5. The quality-of-service guarantee method for fragments according to claim 2, wherein,
    when the received fragment is a low-priority fragment, the determining according to a preset rule whether to update the cache threshold of the priority, and if so, updating the cache threshold of the priority, comprises:
    if the space of the shared cache already occupied by high-priority fragments is smaller than a fourth preset value, the space of the shared cache already occupied by medium-priority fragments is smaller than the medium-priority cache threshold minus a fifth preset value and greater than the minimum value of the medium-priority cache threshold, and the space of the shared cache already occupied by low-priority fragments is greater than the low-priority cache threshold minus a sixth preset value, increasing the low-priority cache threshold by the sixth preset value as the new low-priority cache threshold and decreasing the medium-priority cache threshold by the sixth preset value as the new medium-priority cache threshold.
  6. The quality-of-service guarantee method for fragments according to any one of claims 1 to 5, further comprising, after the storing the fragment in the buffer space:
    obtaining descriptor information of the fragment, the descriptor information comprising priority information, size information and cache address information;
    sending the descriptor information into a fragment descriptor queue;
    wherein the taking fragments to be sent out of the buffer space according to the first-in-first-out rule and sending them out comprises:
    taking the descriptor information of a fragment to be sent out of the fragment descriptor queue according to the first-in-first-out rule;
    retrieving the corresponding fragment from the buffer space according to the descriptor information; and
    adding physical-layer encapsulation to the retrieved fragment and sending it out.
  7. A field programmable gate array, comprising:
    a receiving module configured to receive a fragment, the fragment being cut from an Ethernet packet coming from at least one digital subscriber line port;
    a parsing module configured to parse priority information of the fragment from the fragment;
    a first determining module configured to determine, according to the priority information of the fragment, the size of the buffer space already occupied by fragments of that priority, and a cache threshold allocated to that priority, whether to store the fragment in a buffer space;
    a storage module configured to store the fragment in the buffer space if the first determining module determines that the fragment needs to be stored in the buffer space;
    a second determining module configured to determine according to a preset rule whether to update the cache threshold of the priority;
    an update module configured to update the cache threshold of the priority if the second determining module determines according to the preset rule that the cache threshold of the priority needs to be updated; and
    an outgoing module configured to take fragments to be sent out of the buffer space according to a first-in-first-out rule and send them out.
  8. The field programmable gate array according to claim 7, wherein the buffer space comprises a high-priority exclusive cache space and a shared cache space, the high-priority exclusive cache space being used to store high-priority fragments and the shared cache space being used to store fragments of every priority;
    the storage module is configured to preferentially store a high-priority fragment in the high-priority exclusive cache space when the fragment received by the receiving module is a high-priority fragment.
  9. The field programmable gate array according to claim 8, wherein
    the update module is configured to: if the first determining module determines that the received fragment needs to be stored in the shared cache space and the second determining module determines that the space of the shared cache already occupied by the priority corresponding to the fragment is greater than the cache threshold of that priority minus a first preset value and less than the maximum value of the cache threshold of that priority, increase the cache threshold of that priority by the first preset value as the new cache threshold of that priority, and decrease the cache threshold of any one priority lower than that priority by the first preset value as the new cache threshold of that lower priority.
  10. The field programmable gate array according to claim 8, wherein
    the update module is configured to: if the fragment received by the receiving module is a high-priority fragment, the first determining module determines that the high-priority fragment needs to be stored in the shared cache space, and the second determining module determines that the space of the shared cache already occupied by the high-priority fragments is smaller than the high-priority cache threshold minus a second preset value and greater than the minimum value of the high-priority cache threshold, decrease the high-priority cache threshold by a third preset value as the new high-priority cache threshold, and increase the cache threshold of any one priority lower than the high priority by the third preset value as the new cache threshold of that lower priority.
  11. The field programmable gate array according to claim 8, wherein
    the update module is configured to: if the fragment received by the receiving module is a low-priority fragment and the second determining module determines that the space of the shared cache already occupied by high-priority fragments is smaller than a fourth preset value, the space of the shared cache already occupied by medium-priority fragments is smaller than the medium-priority cache threshold minus a fifth preset value and greater than the minimum value of the medium-priority cache threshold, and the space of the shared cache already occupied by low-priority fragments is greater than the low-priority cache threshold minus a sixth preset value, increase the low-priority cache threshold by the sixth preset value as the new low-priority cache threshold and decrease the medium-priority cache threshold by the sixth preset value as the new medium-priority cache threshold.
  12. The field programmable gate array according to any one of claims 7 to 11, further comprising:
    an obtaining module configured to obtain descriptor information of the fragment after the storage module stores the fragment in the buffer space, the descriptor information comprising priority information, size information and cache address information;
    a sending module configured to send the descriptor information into a fragment descriptor queue;
    wherein the outgoing module comprises:
    a descriptor-information retrieval sub-module configured to take the descriptor information of a fragment to be sent out of the fragment descriptor queue according to the first-in-first-out rule;
    a fragment retrieval sub-module configured to retrieve the corresponding fragment from the buffer space according to the descriptor information;
    an encapsulation sub-module configured to add physical-layer encapsulation to the retrieved fragment; and
    an outgoing sub-module configured to send out the fragment encapsulated by the encapsulation sub-module.
  13. A computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the quality-of-service guarantee method for fragments according to any one of claims 1 to 6.
PCT/CN2017/098322 2017-03-21 2017-08-21 一种分片的服务质量保证方法及现场可编程逻辑门阵列 WO2018171115A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710169735.0A CN108632169A (zh) 2017-03-21 2017-03-21 一种分片的服务质量保证方法及现场可编程逻辑门阵列
CN201710169735.0 2017-03-21

Publications (1)

Publication Number Publication Date
WO2018171115A1 (zh)

Family

ID=63584849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/098322 WO2018171115A1 (zh) 2017-03-21 2017-08-21 一种分片的服务质量保证方法及现场可编程逻辑门阵列

Country Status (2)

Country Link
CN (1) CN108632169A (zh)
WO (1) WO2018171115A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111355673A (zh) * 2018-12-24 2020-06-30 深圳市中兴微电子技术有限公司 一种数据处理方法、装置、设备及存储介质
CN110232029B (zh) * 2019-06-19 2021-06-29 成都博宇利华科技有限公司 一种基于索引的fpga中ddr4包缓存的实现方法
CN111563060A (zh) * 2020-07-13 2020-08-21 南京艾科朗克信息科技有限公司 一种基于fpga报单管理的共享缓存方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6633835B1 (en) * 2002-01-10 2003-10-14 Networks Associates Technology, Inc. Prioritized data capture, classification and filtering in a network monitoring environment
CN100531114C (zh) * 2004-10-22 2009-08-19 中兴通讯股份有限公司 Dsl系统中对以太网数据包进行分段处理的装置
CN103581055B (zh) * 2012-08-08 2016-12-21 华为技术有限公司 报文的保序方法、流量调度芯片及分布式存储系统
CN102833159B (zh) * 2012-08-16 2015-10-21 中兴通讯股份有限公司 报文拥塞处理方法及装置
CN106385386B (zh) * 2016-08-31 2019-04-12 成都飞鱼星科技股份有限公司 应用随动智能流量控制方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101938406A (zh) * 2009-07-02 2011-01-05 华为技术有限公司 微波多通道报文发送方法和装置及传送系统
CN102025638A (zh) * 2010-12-21 2011-04-20 福建星网锐捷网络有限公司 基于优先级的数据传输方法、装置及网络设备
CN102594691A (zh) * 2012-02-23 2012-07-18 中兴通讯股份有限公司 一种处理报文的方法及装置
WO2014015498A1 (zh) * 2012-07-26 2014-01-30 华为技术有限公司 报文发送方法、接收方法、装置及系统
CN106027416A (zh) * 2016-05-23 2016-10-12 北京邮电大学 一种基于时空结合的数据中心网络流量调度方法及系统

Also Published As

Publication number Publication date
CN108632169A (zh) 2018-10-09

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17902064; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17902064; Country of ref document: EP; Kind code of ref document: A1)