WO2013189364A1 - Method and apparatus for processing packets (一种处理报文的方法和装置) - Google Patents

Method and apparatus for processing packets - Download PDF

Info

Publication number
WO2013189364A1
WO2013189364A1 (PCT/CN2013/081778)
Authority
WO
WIPO (PCT)
Prior art keywords
node
information
multicast
message
packet
Prior art date
Application number
PCT/CN2013/081778
Other languages
English (en)
French (fr)
Inventor
高继伟
黄炜
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Priority to EP13807242.6A (EP2830269B1)
Priority to ES13807242.6T (ES2684559T3)
Priority to JP2014560246A (JP5892500B2)
Priority to RU2014141198/08A (RU2595764C2)
Priority to US14/395,831 (US9584332B2)
Publication of WO2013189364A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details
    • H04L12/16Arrangements for providing special services to substations
    • H04L12/18Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1881Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with schedule organisation, e.g. priority, sequence management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/15Flow control; Congestion control in relation to multipoint traffic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682Policies or rules for updating, deleting or replacing the stored data

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a method and apparatus for processing packets. Background Art
  • In current packet-switched networks, service processing requires the system to first identify the unicast or multicast attribute of each packet, then internally replicate multicast packets, manage unicast and multicast packets together, schedule and dequeue them according to user-defined rules, and finally reflect all the edits made to the packets at the system egress.
  • Before continuing the analysis, it is worth noting that in variable-length packet-switched networks, packet fragmentation effectively reduces data delay and jitter and improves cache utilization; it is an important buffer-management mechanism in current packet-switched network processing devices.
  • Its implementation divides the entire cache space into n storage units of a fixed size. Whenever a packet arrives, buffer space is allocated according to the packet size: a packet no larger than one storage unit is directly allocated a single unit, while a longer packet may require multiple storage units, in which case the design must record that those units belong to the same packet, typically by managing them with a linked list.
  • For the cache space as a whole, a linked list is thus used to allocate space on packet input and to reclaim space on packet output.
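  • The fixed-size fragmentation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation; the unit size and the free-list representation are choices made for the example.

```python
# Sketch of fixed-size buffer-unit allocation for variable-length packets.
# The cache is divided into n storage units; a packet no larger than one
# unit gets a single unit, a longer packet gets a chain of units linked
# into a list, and output walks the chain to reclaim the units.

UNIT_SIZE = 64  # bytes per storage unit (illustrative value)

class UnitCache:
    def __init__(self, n_units):
        self.free_list = list(range(n_units))   # indices of free units
        self.next_unit = {}                     # unit -> next unit of same packet

    def alloc_packet(self, length):
        """Allocate enough units for one packet; return the head unit index."""
        n_needed = max(1, -(-length // UNIT_SIZE))  # ceiling division
        if len(self.free_list) < n_needed:
            raise MemoryError("cache full")
        units = [self.free_list.pop(0) for _ in range(n_needed)]
        for a, b in zip(units, units[1:]):      # link units of the same packet
            self.next_unit[a] = b
        self.next_unit[units[-1]] = None        # tail of the packet's chain
        return units[0]

    def free_packet(self, head):
        """Walk the chain on output and return every unit to the free list."""
        while head is not None:
            nxt = self.next_unit.pop(head)
            self.free_list.append(head)
            head = nxt
```

  • A 100-byte packet, for example, occupies two 64-byte units linked into a chain; freeing it walks the chain and returns both units to the free list.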
  • Correspondingly, the scheduling management for packet output has a mechanism that mirrors cache management: descriptor management. Its function is to generate a descriptor for each packet already stored in the cache; the descriptor records the packet's pointer into the cache space.
  • Corresponding to the packet entity's cache space, descriptors are stored in a node space that is likewise managed with linked lists. Each descriptor occupies one node: on enqueue, a node is allocated and the descriptor is stored in the corresponding queue according to user-defined rules; on dequeue, the node is reclaimed and the descriptor is sent to cache-space management, which extracts the packet entity through the cache-space pointer.
  • If the system supported only unicast traffic, the packet entity's storage space could correspond one-to-one with the descriptor's node space, and the two management mechanisms could be merged into one; the present contradiction lies in the handling of multicast packets. Because multicast copies a packet's descriptor multiple times and finally maps the copies to different queues, one multicast packet occupies multiple nodes while the packet entity exists only once, leading to the processing flow shown in Figure 1: for the multiple nodes of the same multicast packet, each descriptor's storage-space pointer points to the storage address of the packet linked list, which implements store-and-forward of multicast packets. It can also be seen that the existing solution contains two management mechanisms; the overhead and management of the two linked lists, for packets and for queues, are large in scale and complex in their interactions, clearly increasing the cost of maintenance and management. Summary of the Invention
  • The technical problem to be solved by the present invention is to provide a method and device for processing packets that realize unified storage of unicast and multicast packets, with the descriptor linked list corresponding to the packet-entity cache resource, thereby significantly reducing the management overhead of unicast and multicast packets and improving node aggregation capability.
  • To solve the above technical problem, the present invention provides a method for processing a packet, including: allocating a node for an input packet in a buffer space, storing the packet, and using the location corresponding to the buffer space as the index information of the packet's descriptor;
  • extracting the descriptor information of the packet;
  • framing the descriptor information of the packet together with the node information of the packet, and storing the result in a node linked list.
  • The foregoing method further has the following feature: allocating a node for the input packet in the buffer space and storing the packet includes: requesting a node from the cache's free list and maintaining the corresponding linked-list pointer; and storing the packet in an external storage unit at the cache address corresponding to the node.
  • The foregoing method further has the following feature: after a node is requested from the cache's free list, the method includes:
  • performing enqueue storage for the node; if adjacent nodes in the same queue both have the multicast attribute, an empty node is inserted between them, the multicast pointer of the preceding node points to the address of the empty node, and the multicast pointer of the empty node points to the following node.
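  • The empty-node rule can be illustrated with a small sketch. The structure here is hypothetical: the field names `next_m` and `ept` are assumptions based on the multicast pointer and the empty-node tag mentioned later in the description.

```python
# When two multicast packets are adjacent in the same queue, an empty node
# is spliced between them: the preceding node's multicast pointer targets
# the empty node, and the empty node's multicast pointer targets the next
# node, keeping the multicast-dimension chain unambiguous.

class Node:
    def __init__(self, idx, multicast=False, ept=False):
        self.idx = idx
        self.multicast = multicast
        self.ept = ept          # empty-node tag
        self.next_m = None      # multicast-dimension pointer

def link_multicast(prev, nxt, alloc_empty):
    """Link two queue neighbours; insert an empty node if both are multicast."""
    if prev.multicast and nxt.multicast:
        empty = alloc_empty()           # ept node taken from the free list
        prev.next_m = empty
        empty.next_m = nxt
        return empty
    prev.next_m = nxt
    return None
```

  • In the walkthrough later in this document, node B71 plays exactly this role between the multicast packets headed by B51 and B82.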
  • The foregoing method further has the following feature: after the packet has been stored in the external storage unit at the cache address corresponding to the node, the method further includes:
  • after a dequeue command is received, obtaining the linked-list pointer of the node according to the dequeue command, and reading the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.
  • The descriptor information includes one or more of the following:
  • unicast information of the packet, multicast information of the packet, index information, the packet-tail attribute of the current node, the number of valid bytes, and the queue number.
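  • As a rough illustration, the listed descriptor fields could be grouped and packed as in the following sketch. The field names and bit widths are assumptions made for the example; the patent does not specify a layout.

```python
# Illustrative descriptor covering the fields listed above: unicast info,
# multicast info, index information, the packet-tail attribute of the
# current node, the number of valid bytes, and the queue number.

from dataclasses import dataclass

@dataclass
class Descriptor:
    unicast_info: int    # unicast-dimension link information
    multicast_info: int  # multicast-dimension link information
    index: int           # node index == location in the cache space
    eop: bool            # packet-tail attribute of the current node
    valid_bytes: int     # valid bytes held in this node's storage unit
    queue: int           # destination queue number

def pack(desc: Descriptor) -> int:
    """Pack part of the descriptor into one word with assumed bit widths."""
    word = desc.queue                      # queue number (assumed 8 bits)
    word = (word << 1) | int(desc.eop)     # 1-bit packet-tail attribute
    word = (word << 7) | desc.valid_bytes  # up to 127 valid bytes (assumed)
    word = (word << 16) | desc.index       # 16-bit node index (assumed)
    return word
```

  • Framing, in these terms, would concatenate such a word with the node's link information before it is written into the node linked list.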
  • To solve the above problem, the present invention further provides an apparatus for processing a packet, comprising: a first module, configured to allocate a node for an input packet in a cache space, store the packet, and use the location corresponding to the cache space as the index information of the packet's descriptor;
  • a second module, configured to extract the descriptor information of the packet; and
  • a third module, configured to frame the descriptor information of the packet together with the node information of the packet and store the result in a node linked list.
  • The first module includes:
  • a first unit, configured to request a node from the cache's free list and maintain the corresponding linked-list pointer; and
  • a second unit, configured to store the packet in an external storage unit at the cache address corresponding to the node.
  • The above apparatus also has the following feature:
  • the first unit is configured to: after requesting a node from the cache's free list, perform enqueue storage for the node; if adjacent nodes in the same queue both have the multicast attribute, an empty node is inserted between them, the multicast pointer of the preceding node points to the address of the empty node, and the multicast pointer of the empty node points to the following node.
  • The apparatus further has the following feature: the apparatus further includes:
  • a fourth module, configured to: after a dequeue command is received, obtain the linked-list pointer of the node according to the dequeue command, and read the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.
  • The above apparatus further has the following feature: the descriptor information includes one or more of the following:
  • unicast information of the packet, multicast information of the packet, index information, the packet-tail attribute of the current node, the number of valid bytes, and the queue number.
  • The embodiments of the present invention provide a method and apparatus for processing packets that no longer need two separate management mechanisms on account of multicast node replication: the unicast/multicast attribute and the corresponding linked-list pointers are managed and maintained within a single node's information.
  • This management and maintenance guarantees that queue nodes and the cache space correspond one-to-one, so the whole management layer only needs to maintain one mechanism for linked-list allocation and reclamation.
  • Under unpredictable mixes of unicast and multicast packets at the system's input side, one node maintains descriptor links in both the unicast and the multicast dimension, achieving unified storage of unicast and multicast packets with the descriptor linked list corresponding to the packet entity.
  • The management overhead of unicast and multicast packets is significantly reduced, and the scheme is more general.
  • Figure 1 is a structural diagram of the commonly used unified unicast/multicast cache;
  • FIG. 2 is a flowchart of a method for processing a packet according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of an apparatus for processing a packet according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the node aggregation operation for unicast and multicast according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the node aggregation operation for multicast linked to multicast according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of various node aggregation operations according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of queue linked-list pointer operations according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of linked-list node information operations according to an embodiment of the present invention. Preferred Embodiments of the Invention
  • In current packet-switched network equipment, the common approach to unified handling of unicast and multicast service packets is to divide the work into two parts: packet-entity management and descriptor management. Packet-entity management allocates and reclaims fixed-size buffer space, stores packet entities, and maintains the complete information of each packet. Descriptor management handles the unified enqueue of unicast packets and opened multicast copies, node allocation, dequeue scheduling, and reclamation of node space, and finally instructs packet-entity management to output the packet.
  • Although these two mechanisms can correctly manage unicast and multicast packets in a unified way, they complicate the system's processing flow and carry considerable implementation overhead. As discussed above, the system uses two management mechanisms to remain compatible with both unicast and multicast processing, which raises the question of whether a single management mechanism could meet the same design requirements. A superficial analysis suggests it cannot, because a multicast packet entering different queues has different next pointers; simply mapping the packet-entity space one-to-one onto the descriptor space would make the implementation overhead intolerable. But if the packet-entity space and the descriptor space can be aggregated at the node level, the storage information of the two original spaces can be integrated, so that one management mechanism suffices, simplifying the flow and yielding a more general structure.
  • The invention designs a method for processing packets that improves node aggregation capability. It first allocates buffer space for the input packet and stores the packet; the location in the buffer space then serves as the index information of the descriptor and participates in the packet's enqueue.
  • During enqueue, nodes are aggregated according to the packet's unicast/multicast attribute, its packet-tail flag, the linkage information of queue members, and so on; the packet's storage space and descriptor space are thereby aggregated into one set of management information corresponding to one access space, so that unicast and multicast packets are managed by a single node-maintenance mechanism with high aggregation capability.
  • FIG. 2 is a flowchart of a method for processing a packet according to an embodiment of the present invention. As shown in FIG. 2, the method in this embodiment includes:
  • S11: in the cache space, allocating a node for the input packet, storing the packet, and using the location corresponding to the cache space as the index information of the packet's descriptor;
  • S12: extracting the descriptor information of the packet;
  • S13: framing the descriptor information of the packet together with the node information of the packet, and storing the result in a node linked list.
  • In step S11, allocating a node for the input packet in the cache space and storing the packet includes: requesting a node from the cache's free list and maintaining the corresponding linked-list pointer;
  • the packet is then stored in an external storage unit at the cache address corresponding to the node.
  • After a node has been requested from the cache's free list, the method includes: performing enqueue storage for the node; if adjacent nodes in the same queue both have the multicast attribute, an empty node is inserted between them, the multicast pointer of the preceding node points to the address of the empty node, and the multicast pointer of the empty node points to the following node.
  • The method further includes:
  • after a dequeue command is received, obtaining the linked-list pointer of the node according to the dequeue command, and reading the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.
  • FIG. 3 is a schematic diagram of an apparatus for processing a packet according to an embodiment of the present invention. As shown in FIG. 3, the apparatus of this embodiment includes:
  • a cache space allocation module (corresponding to the first module), configured to allocate a node for the input packet in the cache space, store the packet, and use the location corresponding to the cache space as the index information of the packet's descriptor;
  • a descriptor extraction module (corresponding to the second module), configured to extract the descriptor information of the packet; and
  • a node information aggregation module (corresponding to the third module), configured to frame the descriptor information of the packet together with the node information of the packet and store the result in a node linked list.
  • The first module includes:
  • a first unit, configured to request a node from the cache's free list and maintain the corresponding linked-list pointer; and
  • a second unit, configured to store the packet in the external storage unit at the cache address corresponding to the node.
  • After requesting a node from the cache's free list, the first unit is further configured to perform enqueue storage for the node; if adjacent nodes in the same queue both have the multicast attribute, an empty node is inserted between them, the multicast pointer of the preceding node points to the address of the empty node, and the multicast pointer of the empty node points to the following node.
  • The apparatus may further include:
  • a queue output management module (corresponding to the fourth module), configured to: after a dequeue command is received, obtain the linked-list pointer of the node according to the dequeue command, and read the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.
  • The descriptor information includes one or more of the following:
  • unicast information of the packet, multicast information of the packet, index information, the packet-tail attribute of the current node, the number of valid bytes, and the queue number.
  • In this embodiment, the apparatus for processing packets mainly comprises four parts: the cache space allocation module, the descriptor extraction module, the node information aggregation module, and the queue output management module. Together these four parts complete the storage, parsing, marking, and extraction of packets, ensuring a very streamlined processing flow for unicast and multicast packets alike. Specifically:
  • The cache space allocation module requests a node from the cache's free list when a packet is input, maintains the corresponding linked-list pointer, and stores the packet in the external storage unit at the cache address corresponding to the node.
  • It also sends the node index of the external storage unit, carrying descriptor information such as the unicast/multicast information of the packet held in that storage unit, the packet-tail attribute of the current node, the number of valid bytes, and the queue number; for the multiple nodes of one packet, the unicast/multicast information must be kept consistent.
  • The descriptor extraction module receives the descriptor information from the cache space allocation module and completes the information storage for the corresponding node.
  • The aligned queue number, packet attributes, and other descriptor information are assembled according to the configured bit fields, and a valid signal is then sent to the node information aggregation module.
  • The node information aggregation module is responsible for the linked-list processing of node aggregation, including the node's enqueue processing and dequeue processing.
  • When it receives the information-valid indication from the descriptor extraction module, it stores the aligned descriptor, extracts the node information allocated for the packet in the buffer space, and then packs and frames this information into the node enqueue FIFO (first-in, first-out).
  • Dequeue commands from the queue output management module are stored in the node dequeue FIFO.
  • The enqueue and dequeue FIFOs are scheduled under a fixed-time-slot polling mechanism: the FIFO is read, the command parsed, the queue number and descriptor extracted, the head and tail pointers maintained, and read/write protection for enqueue and dequeue operations performed, until finally the member information of the linked-list node space is maintained.
  • The improvement in node aggregation capability is mainly reflected in the maintenance of the queue linked-list pointers and the node information.
  • The queue output management module dequeues according to the user's rules, controlling the unlinking of each queue and the output of the data corresponding to each node.
  • It sends the dequeue enable and queue number to the node information aggregation module and, after the operation command has been received and processed, receives the output packet pointer information returned by the node aggregation module.
  • Through the external memory controller, the data corresponding to the node is then read and output, completing store-and-forward.
  • The cache management of this embodiment adopts a one-to-one correspondence between nodes and storage units.
  • The processing is straightforward: a node is allocated in the same way that a storage unit is allocated.
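  • A minimal sketch of this one-to-one correspondence, with an assumed interface: allocating a node from the free list simultaneously fixes the external storage address, so one free list serves both the descriptor space and the packet cache.

```python
# Node index and storage unit correspond one-to-one: allocating node i
# implicitly reserves external storage address base + i * UNIT_SIZE, which
# is also the mapping used at dequeue to turn a linked-list pointer into
# an external read address.

UNIT_SIZE = 64  # illustrative

class UnifiedCache:
    def __init__(self, n_nodes, base_addr=0):
        self.base = base_addr
        self.free = list(range(n_nodes))

    def alloc_node(self):
        """Allocate one node; its storage address follows from its index."""
        node = self.free.pop(0)
        return node, self.storage_addr(node)

    def storage_addr(self, node):
        """Mapping used at dequeue: linked-list pointer -> external address."""
        return self.base + node * UNIT_SIZE

    def free_node(self, node):
        self.free.append(node)
```

  • With this correspondence, reclaiming a node at dequeue automatically reclaims the matching storage unit, which is what lets a single linked-list mechanism replace the two separate ones.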
  • Take queue x in Figure 4 as an example.
  • The nodes occupied by the queue correspond one-to-one to the cache space in the figure; each node maintains both unicast link information and multicast link information.
  • The first two nodes of the queue are unicast nodes.
  • For the first node, only the unicast link information needs to be maintained, including the next pointer and the tail attribute.
  • The next node has the multicast attribute, so its multicast link information is maintained first, up to the tail node of a complete multicast packet; within that packet, all multicast nodes maintain only multicast link information. The pointer to the unicast node that follows the multicast packet in this queue is then maintained in the unicast link information bit field of the second node.
  • For any unicast node whose next node is multicast, the link is maintained in its multicast information bit field; if the node following a multicast packet's tail is unicast, the link is maintained in the unicast bit field of the multicast packet's head node. If the multicast packet is also copied to queue y, as shown in Figure 4, its nodes are maintained in the same way, and the next-m pointers of the nodes belonging to the two queues all point to the same multicast packet head node. The tail node of the multicast packet marks the end of the multicast packet, and the next node of each queue is then found through the next-u of the source node.
  • If the next hop is unicast, next-u points to the address of the next unicast node; if the next hop is still multicast, the pointer instead targets an empty node, and the operation proceeds as above.
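  • Under the stated assumptions about next-u and next-m (the names come from the figure description; the structure below is a sketch, not the patent's exact layout), the sharing of one multicast packet between two queues can be illustrated as:

```python
# Each node carries two link dimensions: next_u (the unicast chain, private
# to each queue) and next_m (the multicast chain). The nodes of two queues
# that precede the same multicast packet both aim their next_m at the shared
# multicast head; after the multicast tail, each queue resumes through the
# next_u of its own source node.

class QNode:
    def __init__(self, idx):
        self.idx = idx
        self.next_u = None  # next unicast node in this queue
        self.next_m = None  # shared multicast-dimension link

# one shared multicast packet: head node 31 -> tail node 32
mc_head, mc_tail = QNode(31), QNode(32)
mc_head.next_m = mc_tail

# queue 1 and queue 2 each link a preceding unicast node to the shared head
q1_src, q2_src = QNode(20), QNode(28)
q1_src.next_m = mc_head
q2_src.next_m = mc_head

# after the multicast tail, each queue continues via its source node's next_u
q1_src.next_u = QNode(40)  # queue 1 resumes at node 40
q2_src.next_u = QNode(48)  # queue 2 resumes at node 48 (illustrative index)
```

  • Only one copy of the multicast packet exists; the two queues diverge again purely through their private next_u links.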
  • Merging the two management mechanisms of the original solution into one and improving the node aggregation capability significantly shortens the processing flow and reduces resource overhead, and the scheme better adapts to the proportion of nodes occupied by unicast and multicast traffic as the system runs in real time.
  • Moreover, after nodes are aggregated, the minimum granularity of scheduled queue members changes from one packet to one node, which is very effective for packet-switched networks with variable-length packets and improves jitter reduction.
  • each node corresponds to an external storage address.
  • The system sequentially inputs unicast short packets stored in B00, B10, and B20, mapped to queue 1; it then inputs unicast packets stored in B08, B18, and B28, mapped to queue 2. Next, an input multicast long packet is stored in B31 and B32 and, after multicast opening, is mapped to queue 1 and queue 2; a subsequent unicast short packet is stored in B40 and mapped to queue 1; a multicast short packet is then stored in B51 and mapped to queue 1 and queue 2.
  • Because two multicast packets are now adjacent, queue 2 has requested an empty node B46 to maintain its node linked list. Another multicast long packet is then mapped to queue 1 and queue 2, first occupying B82; at this point both queue 1 and queue 2 must request empty nodes, B71 and B65 respectively, until the subsequent unicast B90 is mapped to queue 1 and B98 to queue 2.
  • The above scenario traverses the various ways in which unicast and multicast packets can link to one another.
  • Queue 1 is taken as an example to describe, step by step, the aggregation of each node's unicast/multicast information and the queue head and tail pointer operations; queue 2 follows the same steps.
  • The node information may be stored on-chip or off-chip as required; the only difference is that off-chip memory may require byte-mask operations due to bit-width and bandwidth limitations.
  • The enqueue operation of queue 1 is shown in Figure 7.
  • The operations on the linked-list nodes are shown in Figure 8. In the initial state, the queue is empty.
  • Unicast node B00 enqueued: the unicast head pointer (HU) and unicast tail pointer (TU) are both node index 00; the head and tail pointer valid flags HPOS and TPOS are 0, meaning the current head and tail pointers are valid in the unicast bit field.
  • Unicast node B10 enqueued: the head pointer is unchanged, the tail pointer TU is updated to 10, and the pointer valid flags are unchanged. The pointer in the unicast domain of the node storage unit, eopu (the unicast packet-tail tag), descriptor information, and so on are maintained.
  • Unicast node B20 enqueued: the head pointer is unchanged, the tail pointer TU is updated to 20, and the pointer valid flags are unchanged. The unicast-domain information of the node storage unit is maintained.
  • Multicast node B31 enqueued: the tail pointer field TU remains 20, TM (the multicast tail pointer) is updated to 31, and the tail pointer valid flag TPOS is updated to 1, indicating that the current queue tail is a multicast node; the pointer and descriptor information of the node storage unit's multicast domain are maintained. The current node is not a multicast tail, so the multicast packet has not yet ended.
  • Multicast node B32 enqueued: TM is updated to 32, and the tail pointer valid flag TPOS is still 1. The current node is a tail node, marking the end of one multicast packet; the pointer of the node storage unit's multicast domain, eopm (the multicast packet-tail tag), and descriptor information are maintained.
  • Unicast node B40 enqueued: the tail pointer field TM is unchanged, TU is updated to 40, and TPOS is updated back to 0.
  • The unicast-domain link that jumps over the multicast packet headed by B31 needs to be maintained, as shown at address field 20 in Figure 8; this address requires a second operation on the unicast bit field.
  • Multicast node B51 enqueued: the tail pointer field TU remains 40, TM is updated to 51, the tail pointer valid flag TPOS is updated to 1, and the pointer of the node storage unit's multicast domain, eopm, and descriptor information are maintained.
  • Multicast node B82 enqueued: because two multicast packets are now adjacent, an empty node B71 must be inserted; TM is updated to 82, the tail pointer valid flag TPOS is set to 1, and the pointer, descriptor information, and so on of the node storage unit's multicast domain are maintained.
  • Multicast node B84 enqueued: the previous multicast node was not a tail node; TM is updated to 84, and the tail pointer valid flag TPOS is 1.
  • The current node is a tail node, so the pointer of the node storage unit's multicast domain, eopm, and descriptor information are maintained.
  • Unicast node B90 enqueued: the tail pointer field TM is unchanged, TU is updated to 90, and TPOS is updated back to 0; the unicast-domain link that jumps over the multicast packet headed by B82 needs to be maintained, along with the pointer of the node storage unit's unicast domain, eopu, and descriptor information.
  • Node B00 dequeued: the tail pointer field is unchanged, HU is updated to 10, and the HPOS flag is 0.
  • Node B10 dequeued: the tail pointer field is unchanged, HU is updated to 20, and the HPOS flag is 0.
  • Node B20 dequeued: the tail pointer field is unchanged; the linked-list node information is read, HU is updated to 40, HM is updated to 31, and the HPOS flag is updated to 1.
  • Node B31 dequeued: the tail pointer field is unchanged; the linked-list node information is read, HU is unchanged, HM is updated to 32, and the HPOS flag remains unchanged.
  • Node B32 dequeued: the tail pointer field is unchanged; the linked-list node information is read; this node is the multicast tail, HU is unchanged, HM remains unchanged, and the HPOS flag is updated to 0.
  • Node B40 dequeued: the linked-list node information is read, HU is updated to 71, HM is updated to 51, and the HPOS flag is updated to 1.
  • Node B51 dequeued: the ept (empty-node tag) field of the parsed descriptor indicates an empty node, and eopm is valid, indicating that the next node still belongs to a multicast packet; the HPOS flag is updated to 0.
  • Node B82 dequeued: the linked-list node information is read, HU is updated to 90, HM is updated to 83, the HPOS flag is updated to 1, and the current node is not a tail.
  • Node B83 dequeued: the linked-list node information is read, HU is updated to 90, HM is updated to 84; the current node is the packet tail, and the HPOS flag is updated to 0.
  • Node B90 dequeued: all bit fields of the current head and tail pointers compare exactly equal, indicating that the queue is empty. The enqueue and dequeue operations for the queue's unicast and multicast packets are complete.
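  • The final empty-queue check, comparing every bit field of the head and tail pointers, can be sketched as follows; the field set (HU, HM, HPOS versus TU, TM, TPOS) is assumed from the walkthrough above.

```python
# A queue is empty when every bit field of its head pointer equals the
# corresponding field of its tail pointer, as in the B90 dequeue step.

def queue_empty(hu, hm, hpos, tu, tm, tpos):
    """Compare all bit fields of the head and tail pointers."""
    return (hu, hm, hpos) == (tu, tm, tpos)
```

  • Comparing the full field set, rather than a single pointer, matters because head and tail may coincide in one dimension (unicast or multicast) while the queue still holds nodes in the other.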
  • The embodiments of the present invention provide a method and apparatus for processing packets that no longer require two separate management mechanisms because of multicast node replication: the unicast/multicast attribute and the corresponding linked-list pointers are managed and maintained within a single node's information.
  • This ensures that queue nodes and the cache space correspond one-to-one; the whole management layer only needs to maintain one set of linked-list allocation and reclamation mechanisms.
  • One node maintains both the unicast and the multicast descriptor links, achieving unified storage of unicast and multicast packets, with the descriptor linked list corresponding to the packet entity.
  • The management overhead of unicast and multicast packets is significantly reduced, and the scheme is more general.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method and apparatus for processing packets. The method includes: allocating a node for an input packet in a cache space, storing the packet, and using the location corresponding to the cache space as the index information of the packet's descriptor; extracting the descriptor information of the packet; and framing the descriptor information of the packet together with the node information of the packet and storing the result in a node linked list. The above scheme achieves unified storage of unicast and multicast packets, with the descriptor linked list corresponding to the packet-entity cache resource, significantly reducing the management overhead of unicast and multicast packets and improving node aggregation capability.

Description

一种处理报文的方法和装置 技术领域
本发明涉及通信技术领域, 特别是涉及一种处理报文的方法和装置。 背景技术
目前的分组交换网络中, 由于业务处理的需求, 系统首先需要识别报文 的单、 多播属性, 然后在内部针对多播报文进行复制, 单、 多播报文进行管 理, 按照用户设定的规则进行调度出队, 最终在系统的出口体现出报文所有 的编辑情况。 在继续进行分析之前, 有必要提及一下在可变长的分组交换网 络中, 数据包分片技术能够有效降低数据的时延和抖动, 提高缓存利用率, 是目前包交换网络处理设备中针对报文进行緩存管理的一个重要机制。 其实 现机理就是将整个緩存空间按照固定大小划分出 n个存储单元, 每当有报文 输入时, 根据报文大小进行緩存空间的分配, 对于小于等于一个存储单元的 报文, 直接分配一个空间, 而对于较长的报文来说, 可能需要分配多个存储 单元, 同时需要记录这么多个存储单元属于同一个报文, 设计上一般采用链 表来管理。这样对于整个緩存空间,包输入时釆用链表进行緩存空间的分配, 包输出时釆用链表进行缓存空间的回收。
相应的对于报文输出的调度管理有一套与緩存管理相对应的机制 描 述符的管理。 其作用是将每个已经存储在緩存里的报文生成一个描述符, 描 述符里记录该报文在緩存空间的指针。 对应于包实体的緩存空间, 描述符存 储于节点空间, 同样是采用链表管理。 每个描述符占据一个节点, 入队时分 配一个节点并按照用户定义的规则将存储于相应的队列,出队时回收该节点, 同时将描述符送至緩存空间管理, 通过緩存空间的指针提取包实体。
其实我们可以分析下, 假设系统只支持单播报文业务的话, 包实体的存 储空间可以和描述符的节点空间——对应, 这样就可以将两套管理机制合并 为一套, 但目前的矛盾是对于多播包的处理。 由于多播是对报文的描述符进 行了多次的复制, 并最终映射到不同的队列, 那么一个多播报文就占据了多 个节点, 而包实体只有一份, 所以就有了图 1所示的处理流程。 对于同一个 多播报文的多个节点, 其描述符的存储空间指针都指向包链表的存储地址, 就实现了多播报文的存储转发。 也可以看出现有的技术方案中存在两套管理 机制, 包和队列两套链表的开销和管理, 规模庞大, 环节复杂, 明显增加维 护和管理的成本。 发明内容
The technical problem to be solved by the present invention is to provide a method and apparatus for processing messages that achieve unified storage of unicast and multicast messages, with the descriptor linked list corresponding to the packet-entity buffer resources, significantly reducing the management overhead of unicast and multicast messages and improving node aggregation capability.

To solve the above technical problem, the present invention provides a method for processing messages, including: allocating a node in a buffer space for an input message, storing the message, and using the position corresponding to the buffer space as index information of the message's descriptor;

extracting the descriptor information of the message; and

framing the descriptor information of the message together with the node information of the message, and storing the result in a node linked list.
Preferably, the above method also has the following feature: allocating a node in the buffer space for the input message and storing the message includes:

requesting a node from the free linked list of the buffer and maintaining the corresponding linked-list pointer; and

storing the message in an external storage unit according to the buffer address corresponding to the node.

Preferably, the above method also has the following feature: after requesting a node from the free linked list of the buffer, the method includes:

enqueuing and storing the node; if adjacent nodes in the same queue have the multicast attribute, inserting an empty node between the adjacent nodes, with the multicast pointer of the preceding node pointing to the address of the empty node and the multicast pointer of the empty node pointing to the following node.

Preferably, the above method also has the following feature: after storing the message in the external storage unit according to the buffer address corresponding to the node, the method further includes:

after a dequeue command is received, obtaining the linked-list pointer of the node according to the dequeue command, and reading the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.

Preferably, the above method also has the following feature: the descriptor information includes one or more of the following:

unicast information of the message, multicast information of the message, index information, the end-of-message attribute of the current node, the number of valid bytes, and the queue number.
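As an illustration of how descriptor fields like these might be framed into fixed bit fields before being stored in the node linked list, the following sketch packs and unpacks the fields listed above. All field names and widths here are assumptions for demonstration; the document does not specify them.

```python
# Assumed, illustrative field widths: 1-bit multicast flag, 16-bit buffer
# index, 1-bit end-of-message, 7-bit valid-byte count, 8-bit queue number.
FIELDS = [("mcast", 1), ("index", 16), ("eop", 1), ("valid", 7), ("qid", 8)]

def pack_descriptor(**values):
    """Assemble descriptor fields into one word, low bits first."""
    word, shift = 0, 0
    for name, width in FIELDS:
        v = values.get(name, 0)
        assert v < (1 << width), f"{name} overflows {width} bits"
        word |= v << shift
        shift += width
    return word

def unpack_descriptor(word):
    """Recover the field dictionary from a packed descriptor word."""
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out
```

Packing and then unpacking must round-trip every field, which is what a hardware bit-field layout guarantees by construction.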
To solve the above problem, the present invention also provides an apparatus for processing messages, including: a first module, configured to allocate a node in a buffer space for an input message, store the message, and use the position corresponding to the buffer space as index information of the message's descriptor;

a second module, configured to extract the descriptor information of the message; and

a third module, configured to frame the descriptor information of the message together with the node information of the message, and store the result in a node linked list.

Preferably, the above apparatus also has the following feature: the first module includes:

a first unit, configured to request a node from the free linked list of the buffer and maintain the corresponding linked-list pointer; and

a second unit, configured to store the message in an external storage unit according to the buffer address corresponding to the node.

Preferably, the above apparatus also has the following feature:

the first unit is configured to: after requesting a node from the free linked list of the buffer, enqueue and store the node; if adjacent nodes in the same queue have the multicast attribute, an empty node is inserted between the adjacent nodes, with the multicast pointer of the preceding node pointing to the address of the empty node and the multicast pointer of the empty node pointing to the following node.

Preferably, the above apparatus also has the following feature: the apparatus further includes:

a fourth module, configured to: after a dequeue command is received, obtain the linked-list pointer of the node according to the dequeue command, and read the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit. Preferably, the above apparatus also has the following feature: the descriptor information includes one or more of the following:

unicast information of the message, multicast information of the message, index information, the end-of-message attribute of the current node, the number of valid bytes, and the queue number.
In summary, the embodiments of the present invention provide a method and apparatus for processing messages. Two separate management mechanisms are no longer needed on account of multicast node replication: the unicast/multicast attributes and the corresponding linked-list pointers are managed and maintained within the information of a single node, ensuring a one-to-one correspondence between queue nodes and buffer space, so that only one set of linked-list allocation and recycling mechanisms needs to be maintained. In scenarios where the unicast/multicast traffic mix at the system input side is unpredictable, a single node maintains descriptor linked lists in both the unicast and the multicast dimension, thereby achieving unified storage of unicast and multicast messages, with the descriptor linked list corresponding to the packet-entity buffer resources, significantly reducing the management overhead of unicast and multicast messages, with better generality.

Brief Description of the Drawings

Fig. 1 is a structural diagram of the commonly used unified unicast/multicast buffering;

Fig. 2 is a flowchart of a message processing method according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of a message processing apparatus according to an embodiment of the present invention;

Fig. 4 is a schematic diagram of the node aggregation operation for unicast and multicast according to an embodiment of the present invention;

Fig. 5 is a schematic diagram of the node aggregation operation for multicast linked to multicast according to an embodiment of the present invention;

Fig. 6 is a schematic diagram of various node aggregation operations according to an embodiment of the present invention;

Fig. 7 is a schematic diagram of pointer operations on the queue linked list according to an embodiment of the present invention;

Fig. 8 is a schematic diagram of node-information operations on the linked list according to an embodiment of the present invention.

Preferred Embodiments of the Invention
Embodiments of the present invention will be described in detail below with reference to the drawings. It should be noted that, where there is no conflict, the embodiments of this application and the features within them may be combined with one another arbitrarily.

In current packet-switching equipment, unified handling of unicast and multicast traffic is commonly divided into two parts: packet-entity management and descriptor management. Packet-entity management allocates and reclaims fixed-size buffer space, stores the packet entities, and maintains the complete information of the messages. Descriptor management handles the unified enqueuing of unicast traffic and expanded multicast traffic, as well as the allocation, dequeue scheduling, and reclamation of node space, finally instructing packet-entity management to output the messages.

Although these two mechanisms can correctly perform unified management of unicast and multicast messages, they complicate the system's processing flow and incur considerable implementation overhead. As discussed above, the two management mechanisms exist for compatibility between unicast and multicast handling, so the question is whether a single management mechanism can meet the same design requirements. A superficial analysis is not enough: a multicast packet entering different queues has different next pointers, and naively mapping the packet-entity space one-to-one to the descriptor space would make the implementation overhead unbearable. But if the packet-entity space and the descriptor space can be aggregated at the node level, integrating the stored information of the two spaces, then a single management mechanism suffices; not only is the flow simpler, the structure is also more general.

The present invention designs a message processing method to improve node aggregation capability. First, buffer space is allocated for the input message and the message is stored; then the position corresponding to that buffer space serves as the descriptor's index information and participates in the enqueuing of the message. During enqueuing, nodes are aggregated according to attributes such as the unicast/multicast attribute of the message, the end-of-message marker, and the linking information of queue members, so that the message's storage space and descriptor space are aggregated into one set of management information corresponding to one access space. In this way, unicast and multicast messages are managed by a node-maintenance mechanism with high aggregation capability.
Fig. 2 is a flowchart of the message processing method of an embodiment of the present invention. As shown in Fig. 2, the method of this embodiment includes:

S11: allocating a node in a buffer space for an input message, storing the message, and using the position corresponding to the buffer space as index information of the message's descriptor;

S12: extracting the descriptor information of the message;

S13: framing the descriptor information of the message together with the node information of the message, and storing the result in a node linked list.

In step S11, allocating a node in the buffer space for the input message and storing the message includes: requesting a node from the free linked list of the buffer and maintaining the corresponding linked-list pointer; and

storing the message in the external storage unit according to the buffer address corresponding to the node.

After a node is requested from the free linked list of the buffer, the method includes: enqueuing and storing the node; if adjacent nodes in the same queue have the multicast attribute, an empty node is inserted between the adjacent nodes, with the multicast pointer of the preceding node pointing to the address of the empty node and the multicast pointer of the empty node pointing to the following node.
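The empty-node rule above can be sketched in a few lines. Following the Fig. 5 embodiment described later, the preceding multicast packet's unicast pointer (next_u) is pointed at the empty node, and the empty node's multicast pointer (next_m) at the following multicast node. The Node class and the allocator callback are illustrative assumptions, not the patent's hardware structures.

```python
class Node:
    def __init__(self, idx, mcast=False, empty=False):
        self.idx, self.mcast, self.empty = idx, mcast, empty
        self.next_u = None   # unicast-dimension link field
        self.next_m = None   # multicast-dimension link field

def link_tail(tail, new, alloc_empty):
    """Link `new` after queue tail `tail`, inserting an empty node when
    two adjacent nodes in the same queue are both multicast.
    Returns the node that becomes the new effective tail anchor."""
    if tail.mcast and new.mcast:
        gap = alloc_empty()      # empty node between the two multicasts
        tail.next_u = gap        # preceding packet's next_u -> empty node
        gap.next_m = new         # empty node's next_m -> next multicast
        return gap
    if new.mcast:
        tail.next_m = new        # unicast followed by multicast
    else:
        tail.next_u = new        # next node kept in the unicast field
    return tail
```

With one link field per dimension, each node stays fixed-size regardless of how many multicast packets follow; only the empty node absorbs the extra pointer.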
After the message is stored in the external storage unit according to the buffer address corresponding to the node, the method further includes:

after a dequeue command is received, obtaining the linked-list pointer of the node according to the dequeue command, and reading the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.
Fig. 3 is a schematic diagram of the message processing apparatus of an embodiment of the present invention. As shown in Fig. 3, the apparatus of this embodiment includes:

a buffer-space allocation module (equivalent to the first module), configured to allocate a node in a buffer space for an input message, store the message, and use the position corresponding to the buffer space as index information of the message's descriptor;

a descriptor extraction module (equivalent to the second module), configured to extract the descriptor information of the message; and a node-information aggregation module (equivalent to the third module), configured to frame the descriptor information of the message together with the node information of the message and store the result in a node linked list.

The first module includes:

a first unit, configured to request a node from the free linked list of the buffer and maintain the corresponding linked-list pointer; and a second unit, configured to store the message in the external storage unit according to the buffer address corresponding to the node.

The first unit is further configured to, after requesting a node from the free linked list of the buffer, enqueue and store the node; if adjacent nodes in the same queue have the multicast attribute, an empty node is inserted between the adjacent nodes, with the multicast pointer of the preceding node pointing to the address of the empty node and the multicast pointer of the empty node pointing to the following node.

In a preferred embodiment, the apparatus may further include:

a queue packet-output management module (equivalent to the fourth module), configured to, after a dequeue command is received, obtain the linked-list pointer of the node according to the dequeue command, and read the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit. The descriptor information includes one or more of the following:

unicast information of the message, multicast information of the message, index information, the end-of-message attribute of the current node, the number of valid bytes, and the queue number.
The message processing apparatus of the embodiment of the present invention mainly includes the following four parts: a buffer-space allocation module, a descriptor extraction module, a node-information aggregation module, and a queue packet-output management module. Together these four parts accomplish the storage, parsing, marking, and extraction of messages, ensuring a very streamlined processing flow for unicast and multicast messages. Specifically:

The buffer-space allocation module, on message input, first requests a node from the buffer-based free linked list, then maintains the corresponding linked-list pointer, and stores the message into the external storage unit at the buffer address corresponding to the node. Each time the storage operation of a node's external storage unit completes, the module sends the node index of that external storage unit to the descriptor extraction module, together with descriptor information such as the unicast/multicast information of the message represented by that unit, the end-of-message attribute of the current node, the number of valid bytes, and the queue number. For multiple nodes of the same message, the unicast/multicast information must be kept consistent.

The descriptor extraction module receives the descriptor information from the buffer-space allocation module and completes the information storage of the corresponding node. When the received allocated-node information is valid, it assembles the aligned descriptor information, such as queue number and message attributes, according to the configured bit fields, and then drives a valid signal to the node-information aggregation module.

The node-information aggregation module is responsible for the linked-list processing of node aggregation, including inserting nodes into and removing nodes from the linked list. On receiving a valid-information indication from the descriptor extraction module, it samples and stores the aligned descriptor, extracts the node information allocated to the message in the buffer space, then packs and frames this information and stores it in the node-enlink FIFO (first in, first out). Correspondingly to enlink storage, dequeue commands from the queue packet-output management module are stored in the node-delink FIFO. The enqueue and dequeue FIFOs are scheduled under a fixed-timeslot polling mechanism: reading the FIFO, parsing the command, extracting the queue number and descriptor, maintaining head and tail pointers, performing read/write protection for enlink and delink operations, and finally maintaining the member information of the linked-list node space. The improvement in node aggregation capability lies mainly in the maintenance of queue linked-list pointers and node information: by analyzing the descriptor attributes to maintain the linked-list pointers and the node's unicast/multicast link information, the original packet linked list and node linked list are substantially compressed and simplified.

The queue packet-output management module performs dequeue scheduling according to user rules, controlling the delinking of each queue's nodes and the output of the data corresponding to those nodes. In one dequeue request operation, this module sends a dequeue enable and a queue number to the node-information aggregation module; after selection and processing of the operation command, it finally receives the node packet-output pointer information sent back by the node aggregation module. According to the mapping between the node pointer and the data storage unit, it drives the external memory controller to read and output the data corresponding to that node, completing store-and-forward.
The buffer management of this embodiment adopts a one-to-one correspondence between nodes and storage units. For unicast packets, or unicast nodes, this is straightforward: a node is allocated in the same way as a storage unit. To support unified unicast/multicast management, however, three further cases must be considered: multicast following a unicast node, multicast following a multicast node, and unicast following a multicast node. If node-based management can implement these four combinations, unified unicast/multicast management is achieved; in essence, this means solving the linked-list management problem among queues for the same multicast packet or multicast fragment. These cases are explained in turn below.

Take queue x in Fig. 4 as an example. The nodes occupied by the queue in the figure correspond one-to-one to the buffer space, and each node maintains both unicast link information and multicast link information. The first two nodes of the queue are unicast nodes. For the first node, only its unicast link information needs to be maintained, including the next pointer, the end-of-packet attribute, and so on. For the second node, the next node has the multicast attribute, so its multicast link information must be maintained first, up to the tail node of a complete multicast packet; in between, all multicast nodes maintain only multicast link information. Afterwards, the pointer information of the unicast node following that queue's multicast packet is maintained in the unicast link-information bit field of the second node. Thus, for any unicast node whose next node is multicast, the link is maintained in its multicast information bit field; if the node after a multicast packet tail is unicast, it is maintained in the unicast bit field of the node preceding the multicast packet head. If the multicast packet is also replicated to queue y, as marked in Fig. 4, the same maintenance method is used: the next_m pointers of the nodes belonging to the two queues both point to the same multicast packet head node, the tail node of the multicast packet indicates its end, and the next node of each queue is then found through the next_u of the source node.

What if the situation is more complicated and a multicast packet is followed by another multicast packet? First analyze the case of two connected multicast packets. From the analysis above, after the link operation of one multicast packet completes, the next pointer is found through the information of the node pointing to its packet head. In the case now under discussion, if the next pointer is again a multicast node, a further multicast bit field would have to be added; with several consecutive multicast packets, the node pointing to the head of the first multicast packet would need multiple additional multicast bit fields. Designed this way, the storage overhead would be unbearable and the utilization extremely low. Therefore, when a queue has two adjacent multicast packets, an empty node must be inserted; the specific operation is illustrated in Fig. 5. Compared with Fig. 4, what follows the multicast packets in queue x and queue y is no longer a unicast packet but another multicast packet, so an empty node must now be inserted in each queue: the next_u of the preceding multicast packet points to the address of this empty node, i.e., the empty node's index, the empty node's multicast pointer next_m points to the next multicast packet, and the empty node's unicast pointer next_u is the next hop. If the next hop is unicast, next_u points to the address of an actually existing unicast node; if the next hop is again multicast, it points to another empty node, with the same operation as above. Using the method of the embodiments of the present invention, the two management mechanisms of the original scheme are merged into one; by improving node aggregation capability, the processing flow is significantly simplified, resource overhead is reduced, and the scheme adapts better to the varying proportions of nodes occupied by unicast and multicast during real-time system operation. Moreover, after node aggregation, the minimum granularity of a scheduled queue member changes from one packet to one node, which is very effective for reducing jitter in packet-switched networks with variable-length packets.
To explain the technical solution of the present invention more clearly, further description is given below with reference to Fig. 6, Fig. 7, Fig. 8, and a specific embodiment, which does not limit the present invention.

Assume a storage region as shown by the dashed grid in Fig. 6, managed by node aggregation, with each node corresponding one-to-one to an external storage address. The system successively inputs unicast short packets stored at B00, B10, B20, mapped to queue 1, and then unicast packets stored at B08, B18, B28, mapped to queue 2. Next, a multicast long packet is input and stored at B31 and B32; after multicast replication it is mapped to both queue 1 and queue 2. Then a unicast short packet stored at B40 is mapped to queue 1, after which a multicast short packet stored at B51 is mapped to queue 1 and queue 2; at this point queue 2, having two consecutive multicast packets, requests an empty node B46 for node-aggregation linked-list maintenance. Afterwards another multicast long packet arrives, mapped to queue 1 and queue 2, its head occupying B82; now queue 1 and queue 2 must each request an empty node, B71 and B65, until the later unicast packets arrive: B90 mapped to queue 1 and B98 mapped to queue 2.

The above scenario covers the various ways in which unicast and multicast packets can be connected to one another. Taking queue 1 as an example, the aggregation operations on the unicast/multicast information of each node of this queue and the head/tail pointer operations are described step by step below; queue 2 follows the same steps. The memory storing the linked-list node information may be on-chip or off-chip as required; the difference is that an off-chip memory may require byte-mask operations due to width and bandwidth constraints.

The enqueue operations of queue 1 are shown in Fig. 7, and the linked-list node operations in Fig. 8. In the initial state the queue is empty.
Unicast node B00 enqueued: set the unicast head pointer (HU) and unicast tail pointer (TU) to node index 00, and set the head/tail pointer valid flags HPOS and TPOS to 0, indicating that the unicast bit fields of the current head and tail pointers are valid.

Unicast node B10 enqueued: head pointer unchanged, tail pointer TU updated to 10, pointer valid flags unchanged. Maintain the pointer of the node storage unit's unicast field, eopu (the end-of-packet flag of a unicast packet), the descriptor information, and so on.

Unicast node B20 enqueued: head pointer unchanged, tail pointer (TU) updated to 20, pointer valid flags unchanged. Maintain the unicast-field information of the node storage unit.

Multicast node B31 enqueued: tail pointer field TU remains 20, TM (the multicast tail pointer) is updated to 31, and the tail-pointer valid flag (TPOS) is updated to 1, indicating that the tail of the current queue is a multicast node; meanwhile maintain the pointer and descriptor information of the node storage unit's multicast field. The current node is not a multicast tail, meaning the multicast packet has not yet ended.

Multicast node B32 enqueued: TM updated to 32, tail-pointer valid flag TPOS still 1; the current node is a tail node, indicating the end of a multicast packet. Meanwhile maintain the pointer of the node storage unit's multicast field, eopm (the end-of-packet flag of a multicast packet), the descriptor information, and so on.

Unicast node B40 enqueued: tail pointer field TM unchanged, TU updated to 40, TPOS updated back to 0; the unicast-field information of the node pointing to multicast node B31 must be maintained, as shown by the bit field at address 20 in Fig. 7, which indicates that this address requires a second operation on its unicast bit field.

Multicast node B51 enqueued: tail pointer field TU remains 40, TM updated to 51, tail-pointer valid flag TPOS updated to 1; meanwhile maintain the pointer of the node storage unit's multicast field, eopm, the descriptor information, and so on.

Multicast node B82 enqueued: because of two consecutive multicast packets, an empty node B71 must be inserted and its empty flag set; TM is updated to 82, the tail-pointer valid flag TPOS is kept at 1, and the pointer and descriptor information of the node storage unit's multicast field are maintained. Multicast node B83 enqueued: the previous multicast node is not a tail node; TM updated to 83, tail-pointer valid flag TPOS kept at 1.

Multicast node B84 enqueued: the previous multicast node is not a tail node; TM updated to 84, tail-pointer valid flag TPOS kept at 1. The current node is a tail node; maintain the pointer of the node storage unit's multicast field, eopm, the descriptor information, and so on.

Unicast node B90 enqueued: tail pointer field TM unchanged, TU updated to 90, TPOS updated back to 0; the unicast-field information pointing to multicast node B82 must be maintained, along with the pointer of the node storage unit's unicast field, eopu, the descriptor information, and so on.
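The tail-pointer bookkeeping in the enqueue steps above can be replayed with a small sketch that tracks only TU, TM, and TPOS. Empty-node insertion and the per-node link fields are omitted, so this is a simplified model of the worked example, not the full design.

```python
def replay_enqueue(events):
    """Replay tail-pointer updates (TU, TM, TPOS) for one queue.

    `events` is a list of (node_index, 'u'|'m') enqueued nodes. Each
    unicast node updates TU and sets TPOS to 0; each multicast node
    updates TM and sets TPOS to 1, mirroring the steps above.
    """
    tu = tm = None
    tpos = 0                      # 0: unicast tail valid, 1: multicast
    trace = []
    for idx, kind in events:
        if kind == 'u':
            tu, tpos = idx, 0     # unicast node: TU <- idx, TPOS <- 0
        else:
            tm, tpos = idx, 1     # multicast node: TM <- idx, TPOS <- 1
        trace.append((tu, tm, tpos))
    return trace
```

Replaying queue 1's sequence B00, B10, B20, B31, B32, B40, B51, B82, B83, B84, B90 reproduces the states quoted above, e.g. (TU=20, TM=31, TPOS=1) after B31 and (TU=40, TM=32, TPOS=0) after B40.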
The above describes the unicast/multicast node aggregation operations during enqueue. Next consider the dequeue of this queue; the queue pointer operations are shown in Fig. 7 and the linked-list node operations in Fig. 8.

Node B00 dequeued: tail pointer fields unchanged, HU updated to 10, HPOS remains 0. Node B10 dequeued: tail pointer fields unchanged, HU updated to 20, HPOS remains 0. Node B20 dequeued: tail pointer fields unchanged; read the linked-list node information, HU updated to 40, HM updated to 31, HPOS updated to 1.

Node B31 dequeued: tail pointer fields unchanged; read the linked-list node information, HU unchanged, HM updated to 32, HPOS unchanged.

Node B32 dequeued: tail pointer fields unchanged; read the linked-list node information; this node is a multicast tail fragment, HU unchanged, HM unchanged, HPOS updated to 0.

Node B40 dequeued: read the linked-list node information, HU updated to 71, HM updated to 51, HPOS updated to 1.

Node B51 dequeued: analysis of the descriptor shows the ept (empty-node flag) field is valid, indicating that the node is an empty node; eopm is valid, indicating that the next node is still a multicast packet; HPOS updated to 0.

Node B82 dequeued: read the linked-list node information, HU updated to 90, HM updated to 83, HPOS updated to 1; the current node is not an end of packet.

Node B83 dequeued: read the linked-list node information, HU updated to 90, HM updated to 84; the current node is an end of packet, HPOS updated to 0.

Node B90 dequeued: all bit fields of the current head and tail pointers compare fully equal, indicating that the queue is empty. The enqueue and dequeue operations of the queue's unicast and multicast messages are complete.
Those of ordinary skill in the art will understand that all or part of the steps of the above method may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk. Optionally, all or part of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware or in the form of software functional modules. The present invention is not limited to any specific combination of hardware and software. The above are only preferred embodiments of the present invention; of course, the present invention may have various other embodiments, and those skilled in the art can make corresponding changes and modifications according to the present invention without departing from its spirit and essence, but such corresponding changes and modifications shall all fall within the protection scope of the appended claims of the present invention.

Industrial Applicability

In summary, the embodiments of the present invention provide a method and apparatus for processing messages. Two separate management mechanisms are no longer needed on account of multicast node replication: the unicast/multicast attributes and the corresponding linked-list pointers are managed and maintained within the information of a single node, ensuring a one-to-one correspondence between queue nodes and buffer space, so that only one set of linked-list allocation and recycling mechanisms needs to be maintained. In scenarios where the unicast/multicast traffic mix at the system input side is unpredictable, a single node maintains descriptor linked lists in both the unicast and the multicast dimension, thereby achieving unified storage of unicast and multicast messages, with the descriptor linked list corresponding to the packet-entity buffer resources, significantly reducing the management overhead of unicast and multicast messages, with better generality.

Claims

1. A method for processing messages, comprising:

allocating a node in a buffer space for an input message, storing the message, and using the position corresponding to the buffer space as index information of a descriptor of the message;

extracting the descriptor information of the message; and

framing the descriptor information of the message together with node information of the message, and storing the result in a node linked list.

2. The method according to claim 1, wherein allocating a node in the buffer space for the input message and storing the message comprises:

requesting a node from a free linked list of the buffer, and maintaining the corresponding linked-list pointer; and

storing the message in an external storage unit according to the buffer address corresponding to the node.

3. The method according to claim 2, wherein, after requesting a node from the free linked list of the buffer, the method comprises:

enqueuing and storing the node; and, if adjacent nodes in the same queue have the multicast attribute, inserting an empty node between the adjacent nodes, the multicast pointer of the preceding node pointing to the address of the empty node and the multicast pointer of the empty node pointing to the following node.

4. The method according to claim 2, wherein, after storing the message in the external storage unit according to the buffer address corresponding to the node, the method further comprises:

after a dequeue command is received, obtaining the linked-list pointer of the node according to the dequeue command, and reading the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.

5. The method according to any one of claims 1-4, wherein the descriptor information comprises one or more of the following:

unicast information of the message, multicast information of the message, index information, the end-of-message attribute of the current node, the number of valid bytes, and the queue number.
6. An apparatus for processing messages, comprising:

a first module, configured to allocate a node in a buffer space for an input message, store the message, and use the position corresponding to the buffer space as index information of a descriptor of the message;

a second module, configured to extract the descriptor information of the message; and

a third module, configured to frame the descriptor information of the message together with node information of the message, and store the result in a node linked list.

7. The apparatus according to claim 6, wherein the first module comprises:

a first unit, configured to request a node from a free linked list of the buffer and maintain the corresponding linked-list pointer; and

a second unit, configured to store the message in an external storage unit according to the buffer address corresponding to the node.

8. The apparatus according to claim 7, wherein

the first unit is configured to: after requesting a node from the free linked list of the buffer, enqueue and store the node; and, if adjacent nodes in the same queue have the multicast attribute, insert an empty node between the adjacent nodes, the multicast pointer of the preceding node pointing to the address of the empty node and the multicast pointer of the empty node pointing to the following node.

9. The apparatus according to claim 7, wherein the apparatus further comprises:

a fourth module, configured to: after a dequeue command is received, obtain the linked-list pointer of the node according to the dequeue command, and read the data corresponding to the node according to the mapping between the linked-list pointer and the external storage unit.

10. The apparatus according to any one of claims 6-9, wherein the descriptor information comprises one or more of the following:

unicast information of the message, multicast information of the message, index information, the end-of-message attribute of the current node, the number of valid bytes, and the queue number.
PCT/CN2013/081778 2012-10-12 2013-08-19 一种处理报文的方法和装置 WO2013189364A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP13807242.6A EP2830269B1 (en) 2012-10-12 2013-08-19 Message processing method and device
ES13807242.6T ES2684559T3 (es) 2012-10-12 2013-08-19 Procedimiento y dispositivo de procesamiento de mensajes
JP2014560246A JP5892500B2 (ja) 2012-10-12 2013-08-19 メッセージ処理方法及び装置
RU2014141198/08A RU2595764C2 (ru) 2012-10-12 2013-08-19 Способ и устройство обработки сообщений
US14/395,831 US9584332B2 (en) 2012-10-12 2013-08-19 Message processing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210387704.XA CN103731368B (zh) 2012-10-12 2012-10-12 一种处理报文的方法和装置
CN201210387704.X 2012-10-12

Publications (1)

Publication Number Publication Date
WO2013189364A1 true WO2013189364A1 (zh) 2013-12-27

Family

ID=49768140

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/081778 WO2013189364A1 (zh) 2012-10-12 2013-08-19 一种处理报文的方法和装置

Country Status (7)

Country Link
US (1) US9584332B2 (zh)
EP (1) EP2830269B1 (zh)
JP (1) JP5892500B2 (zh)
CN (1) CN103731368B (zh)
ES (1) ES2684559T3 (zh)
RU (1) RU2595764C2 (zh)
WO (1) WO2013189364A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789734A (zh) * 2016-12-21 2017-05-31 中国电子科技集团公司第三十二研究所 在交换控制电路中巨帧的控制系统及方法
CN106789730A (zh) * 2016-12-29 2017-05-31 杭州迪普科技股份有限公司 分片报文的处理方法及装置

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106537858B (zh) 2014-08-07 2019-07-19 华为技术有限公司 一种队列管理的方法和装置
CN107222435B (zh) * 2016-03-21 2020-07-24 深圳市中兴微电子技术有限公司 消除报文的交换头阻的方法及装置
CN107526691B (zh) * 2016-06-21 2020-06-02 深圳市中兴微电子技术有限公司 一种缓存管理方法及装置
CN106656438B (zh) * 2017-01-03 2019-07-23 国家电网公司 一种goose报文序列的生成和编辑方法
CN109660471B (zh) * 2018-12-14 2022-08-16 锐捷网络股份有限公司 基于fpga的指针回收方法及装置
CN110011920B (zh) * 2019-04-11 2021-03-23 盛科网络(苏州)有限公司 一种报文处理方法及装置
CN110445721B (zh) * 2019-09-09 2021-12-14 迈普通信技术股份有限公司 一种报文转发方法及装置
CN113157465B (zh) * 2021-04-25 2022-11-25 无锡江南计算技术研究所 基于指针链表的消息发送方法及装置
CN115497273B (zh) * 2022-04-22 2024-01-09 北京临近空间飞行器系统工程研究所 装订描述方法和基于装订参数链表的无线指令控制方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050036502A1 (en) * 2003-07-23 2005-02-17 International Business Machines Corporation System and method for handling multicast traffic in a shared buffer switch core collapsing ingress VOQ's
CN101150490A (zh) * 2006-09-23 2008-03-26 华为技术有限公司 一种单播和多播业务数据包的队列管理方法和系统
CN101729407A (zh) * 2009-12-04 2010-06-09 西安电子科技大学 基于单多播区分处理的低时延抖动交换方法及设备
CN101835102A (zh) * 2010-05-19 2010-09-15 迈普通信技术股份有限公司 一种用于无线局域网的队列管理方法以及无线接入设备

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002281080A (ja) * 2001-03-19 2002-09-27 Fujitsu Ltd パケットスイッチ装置およびマルチキャスト送出方法
US6918005B1 (en) * 2001-10-18 2005-07-12 Network Equipment Technologies, Inc. Method and apparatus for caching free memory cell pointers
US7394822B2 (en) 2002-06-04 2008-07-01 Lucent Technologies Inc. Using reassembly queue sets for packet reassembly
US7397809B2 (en) * 2002-12-13 2008-07-08 Conexant Systems, Inc. Scheduling methods for combined unicast and multicast queuing
US7546234B1 (en) * 2003-01-08 2009-06-09 Xambala, Inc. Semantic processing engine
US7586911B2 (en) * 2003-10-17 2009-09-08 Rmi Corporation Method and apparatus for packet transmit queue control
US7860097B1 (en) * 2004-02-13 2010-12-28 Habanero Holdings, Inc. Fabric-backplane enterprise servers with VNICs and VLANs
JP2005323231A (ja) * 2004-05-11 2005-11-17 Nippon Telegr & Teleph Corp <Ntt> パケット通信品質制御装置
WO2006086553A2 (en) * 2005-02-09 2006-08-17 Sinett Corporation Queuing and scheduling architecture for a unified access device supporting wired and wireless clients
RU2447595C2 (ru) * 2007-03-16 2012-04-10 Интердиджитал Текнолоджи Корпорейшн Способ и устройство беспроводной связи для поддержки реконфигурации параметров управления радиолинии
US8223788B1 (en) * 2007-10-24 2012-07-17 Ethernity Networks Ltd Method and system for queuing descriptors
CN102447610B (zh) * 2010-10-14 2015-05-20 中兴通讯股份有限公司 实现报文缓存资源共享的方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050036502A1 (en) * 2003-07-23 2005-02-17 International Business Machines Corporation System and method for handling multicast traffic in a shared buffer switch core collapsing ingress VOQ's
CN101150490A (zh) * 2006-09-23 2008-03-26 华为技术有限公司 一种单播和多播业务数据包的队列管理方法和系统
CN101729407A (zh) * 2009-12-04 2010-06-09 西安电子科技大学 基于单多播区分处理的低时延抖动交换方法及设备
CN101835102A (zh) * 2010-05-19 2010-09-15 迈普通信技术股份有限公司 一种用于无线局域网的队列管理方法以及无线接入设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2830269A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789734A (zh) * 2016-12-21 2017-05-31 中国电子科技集团公司第三十二研究所 在交换控制电路中巨帧的控制系统及方法
CN106789734B (zh) * 2016-12-21 2020-03-13 中国电子科技集团公司第三十二研究所 在交换控制电路中巨帧的控制系统及方法
CN106789730A (zh) * 2016-12-29 2017-05-31 杭州迪普科技股份有限公司 分片报文的处理方法及装置
CN106789730B (zh) * 2016-12-29 2020-02-11 杭州迪普科技股份有限公司 分片报文的处理方法及装置

Also Published As

Publication number Publication date
CN103731368A (zh) 2014-04-16
CN103731368B (zh) 2017-10-27
EP2830269B1 (en) 2018-05-23
US20150304124A1 (en) 2015-10-22
US9584332B2 (en) 2017-02-28
RU2595764C2 (ru) 2016-08-27
EP2830269A1 (en) 2015-01-28
JP5892500B2 (ja) 2016-03-23
ES2684559T3 (es) 2018-10-03
EP2830269A4 (en) 2015-05-13
RU2014141198A (ru) 2016-06-10
JP2015511790A (ja) 2015-04-20

Similar Documents

Publication Publication Date Title
WO2013189364A1 (zh) 一种处理报文的方法和装置
US10277518B1 (en) Intelligent packet queues with delay-based actions
US20210255987A1 (en) Programmed Input/Output Mode
US9400606B2 (en) System and method for efficient buffer management for banked shared memory designs
WO2019033857A1 (zh) 报文控制方法及网络装置
US9864633B2 (en) Network processor having multicasting protocol
US10313255B1 (en) Intelligent packet queues with enqueue drop visibility and forensics
WO2012162949A1 (zh) 一种报文重组重排序方法、装置和系统
US10735339B1 (en) Intelligent packet queues with efficient delay tracking
US9112708B1 (en) Processing multicast packets in a network device
US8432908B2 (en) Efficient packet replication
US11949601B1 (en) Efficient buffer utilization for network data units
US9274586B2 (en) Intelligent memory interface
CN116114233A (zh) 自动流管理
AU2014336967B2 (en) Network interface
WO2016202113A1 (zh) 一种队列管理方法、装置及存储介质
WO2018000820A1 (zh) 一种队列管理方法和装置
US20130110968A1 (en) Reducing latency in multicast traffic reception
WO2013078873A1 (zh) 识别应答报文的方法及设备
WO2021147877A1 (zh) 用于静态分布式计算架构的数据交换系统及其方法
US11201831B1 (en) Packed ingress interface for network apparatuses
CN114186163A (zh) 一种应用层网络数据缓存方法
CN115988574B (zh) 基于流表的数据处理方法、系统、设备和存储介质
US11831567B1 (en) Distributed link descriptor memory
WO2024016975A1 (zh) 报文转发方法、装置、设备及芯片系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13807242

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14395831

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2013807242

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2014560246

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2014141198

Country of ref document: RU

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE