WO2012055319A1 - Method and device for dispatching TCAM (Ternary Content Addressable Memory) query and refresh messages - Google Patents

Method and device for dispatching TCAM (Ternary Content Addressable Memory) query and refresh messages

Info

Publication number
WO2012055319A1
WO2012055319A1 (PCT/CN2011/080616)
Authority
WO
WIPO (PCT)
Prior art keywords
query
message
refresh
queue
query message
Prior art date
Application number
PCT/CN2011/080616
Other languages
French (fr)
Chinese (zh)
Inventor
伍益荣
李维民
朱寅
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2012055319A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues

Definitions

  • The present invention relates to the field of network communication technologies, and in particular to a method and apparatus for scheduling TCAM (Ternary Content Addressable Memory) query and refresh messages.
  • The TCAM is mainly used for fast lookup of ACL (Access Control List), routing, and other entries during packet forwarding on network devices. The TCAM lookup and refresh technique based on FPGA (Field Programmable Gate Array) provides entry update and query scheduling, in which the FPGA acts as a relay between the processors or the CPU and the TCAM.
  • On interconnected devices such as routers and switches, TCAM is applied more and more widely in order to achieve fast table-lookup forwarding.
  • With the rapid development of broadband networks, multi-core processors are also used more and more widely; multiple processor cores combined can provide very high processing power. To make full use of the resources of each single-core processor, the processing of packets during forwarding is distributed to the individual processor units. Each processor unit then needs to look up ACL, routing, and other entries for its packets, while the CPU needs to refresh the entries in the TCAM and the entry contents in the processor peripherals, so multiple processors must share access to a single TCAM peripheral. How to let multiple processors perform fast TCAM table-lookup forwarding and entry refreshing while keeping the performance of the processors balanced is the problem that an FPGA-based TCAM query and refresh apparatus needs to solve.
  • FIG. 1 is a structural block diagram of a related-art FPGA-based TCAM query and refresh system, which includes processors, a CPU interface, an FPGA, a TCAM unit, and an SSRAM (Serial Static Random Access Memory); the SSRAM is used to store the routing table.
  • In that technology, the FPGA puts the TCAM queries of the processors and the CPU's refresh requests for entries into the same queue and schedules the requests in the queue according to the priorities of query and refresh, where the CPU's priority for refreshing entries is higher than the processors' priority for TCAM queries. This priority-based scheduling couples query and refresh tightly: when a large number of entries are being updated, the response speed of queries becomes very low, which easily causes packets to be blocked in the network and affects the throughput of the network device.
  • The present invention provides a method and apparatus (including an FPGA device and a network device) for scheduling TCAM query and refresh messages, to at least solve the above problem of slow query response caused by the refresh priority being higher than the query priority.
  • According to one aspect, a method for scheduling TCAM query and refresh messages is provided, including: after receiving a query message, the FPGA puts the query message into a query message queue; after receiving a refresh message, the FPGA puts the refresh message into a refresh message queue; and the FPGA schedules the query messages in the query message queue and the refresh messages in the refresh message queue separately.
  • Preferably, a plurality of query message queues are set on the FPGA, and the query message queues are in one-to-one correspondence with the processors. The FPGA putting the query message into a query message queue includes: the FPGA puts the query message into the query message queue corresponding to the processor number carried in the query message. The FPGA scheduling the query messages in the query message queues includes: the FPGA schedules the multiple query message queues in a polling manner and dequeues the query messages in the currently scheduled query message queue.
  • The FPGA dequeuing the query messages in the scheduled query message queue includes: the FPGA dequeues the query messages in the scheduled query message queue in a first-in-first-out (FIFO) manner.
  • Scheduling the refresh messages in the refresh message queue by the FPGA includes: the FPGA schedules the refresh messages in the refresh message queue in a FIFO manner.
  • After the FPGA schedules a query message, the method further includes: the FPGA receives the query result of the query message and returns the query result to the processor corresponding to the query message; the processor obtains routing information according to the query result and forwards the packet according to the routing information.
  • According to another aspect, an FPGA device is provided, including: a query message enqueue module, configured to put a query message into a query message queue after the query message is received; a refresh message enqueue module, configured to put a refresh message into a refresh message queue after the refresh message is received; a query scheduling module, configured to schedule the query messages in the query message queue; and a refresh scheduling module, configured to schedule the refresh messages in the refresh message queue.
  • The query message enqueue module includes: a queue determining unit, configured to determine, after a query message is received, the corresponding query message queue according to the processor number carried in the query message, wherein the FPGA device is provided with multiple query message queues and the query message queues are in one-to-one correspondence with the processors; and an enqueue unit, configured to put the query message into the query message queue determined by the queue determining unit. The query scheduling module includes: a polling scheduling unit, configured to schedule the multiple query message queues in a polling manner; and a dequeue unit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit.
  • The dequeue unit includes: a dequeue subunit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit in a first-in-first-out (FIFO) manner.
  • The refresh scheduling module includes: a refresh scheduling unit, configured to schedule the refresh messages in the refresh message queue in a FIFO manner.
  • According to a further aspect, a network device is provided, including the foregoing FPGA device. The network device further includes: a processor, configured to send query messages to the FPGA device, receive the query results returned by the FPGA device, obtain routing information according to the query results, and forward packets according to the routing information; and a CPU, configured to send refresh messages to the FPGA device, where a refresh message carries indication information for performing a refresh operation on the ternary content addressable memory (TCAM).
  • Through the present invention, two branches, namely a query processing branch and a refresh processing branch, are set on the FPGA and processed separately without interfering with each other. This solves the problem of slow query response caused by the refresh priority being higher than the query priority, provides high-speed table-lookup forwarding and entry refreshing, realizes fast forwarding, and improves the throughput and hence the performance of the network device.
  • FIG. 1 is a structural block diagram of an FPGA-based TCAM query and refresh system according to the related art
  • FIG. 2 is a flowchart of a method for scheduling TCAM query and refresh messages according to Embodiment 1 of the present invention
  • FIG. 3 is a structural diagram of the cache queues provided in Embodiment 1 of the present invention
  • FIG. 4 is a structural block diagram of a network device according to Embodiment 2 of the present invention
  • FIG. 5 is a flowchart of a method for query message enqueue and dequeue scheduling according to Embodiment 2 of the present invention
  • FIG. 6 is a schematic diagram of query message enqueue and dequeue scheduling according to Embodiment 2 of the present invention
  • FIG. 7 is a flowchart of a method for querying TCAM entries according to Embodiment 2 of the present invention
  • FIG. 8 is a flowchart of a method for the CPU to refresh entries according to Embodiment 2 of the present invention
  • FIG. 9 is a block diagram showing the structure of an FPGA device according to Embodiment 3 of the present invention
  • FIG. 10 is a block diagram showing the structure of a network device according to Embodiment 4 of the present invention.
  • BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, the present invention will be described in detail with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that the embodiments in the present application and the features in the embodiments may be combined with each other if there is no conflict.
  • Embodiment 1. FIG. 2 is a flowchart of a method for scheduling TCAM query and refresh messages according to an embodiment of the present invention. The method includes the following steps: Step S202, after receiving a query message, the field programmable gate array (FPGA) puts the query message into a query message queue; Step S204, after receiving a refresh message, the FPGA puts the refresh message into a refresh message queue; Step S206, the FPGA schedules the query messages in the query message queue and the refresh messages in the refresh message queue separately.
  • The above FPGA stores TCAM query messages and refresh messages in separate paths, which allows queries and refreshes to be scheduled in parallel.
  • In order to achieve balanced handling of the TCAM queries of the processors in a multi-core scenario, preferably, multiple query message queues are set on the FPGA and the query message queues are in one-to-one correspondence with the processors. Correspondingly, step S202 includes: the FPGA puts the query message into the query message queue corresponding to the processor number carried in the query message. The scheduling of the query messages by the FPGA in step S206 includes: the FPGA schedules the multiple query message queues in a polling manner and dequeues the query messages in the currently scheduled query message queue.
  • Polling scheduling here means that the query message queues are scheduled in turn in a fixed order, so that over a period of time each query message queue is scheduled essentially the same number of times.
  • Preferably, the FPGA dequeues the query messages in the scheduled query message queue in a first-in-first-out (FIFO) manner.
  • The FPGA scheduling the refresh messages in the refresh message queue includes: the FPGA schedules the refresh messages in the refresh message queue in a FIFO manner.
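  • The patent describes this behavior as FPGA logic and gives no source code. Purely as an illustration, the following C sketch models the two independent branches in software: one FIFO per processor for query messages, one shared FIFO for refresh messages, round-robin polling across the query FIFOs, and FIFO order within every queue. All names (fifo_t, schedule_query, the queue sizes) are hypothetical and not taken from the patent.

```c
#include <stdbool.h>

#define NUM_PROCESSORS 4
#define QUEUE_LEN      32            /* power of two, as in the embodiment */

typedef struct { unsigned msg_no; /* query or refresh fields omitted */ } msg_t;

typedef struct {
    msg_t    slot[QUEUE_LEN];
    unsigned head, tail;             /* FIFO order: dequeue at head */
} fifo_t;

static fifo_t query_q[NUM_PROCESSORS];   /* one queue per processor */
static fifo_t refresh_q;                 /* single shared refresh queue */
static unsigned rr_next;                 /* next query queue to poll */

static bool fifo_empty(const fifo_t *q) { return q->head == q->tail; }

static bool fifo_pop(fifo_t *q, msg_t *out)
{
    if (fifo_empty(q)) return false;
    *out = q->slot[q->head];
    q->head = (q->head + 1) % QUEUE_LEN;
    return true;
}

/* Query branch: visit the per-processor queues round-robin and dequeue at
 * most one message per call, so each processor gets an equal share.       */
static bool schedule_query(msg_t *out)
{
    for (unsigned i = 0; i < NUM_PROCESSORS; i++) {
        unsigned q = (rr_next + i) % NUM_PROCESSORS;
        if (fifo_pop(&query_q[q], out)) {
            rr_next = (q + 1) % NUM_PROCESSORS;
            return true;
        }
    }
    return false;
}

/* Refresh branch: plain FIFO, scheduled independently of the query branch. */
static bool schedule_refresh(msg_t *out) { return fifo_pop(&refresh_q, out); }
```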
  • After the FPGA schedules a query message, the method may further include: the FPGA receives the query result of the query message and returns it to the processor corresponding to the query message; the processor obtains routing information according to the query result and forwards the packet according to the routing information.
  • In practical applications, TCAM entries are generally refreshed only when the user configuration changes or the link state in the network changes, and the frequency of such changes is low, which makes priority-based scheduling somewhat redundant. Therefore, this embodiment does not set priorities for the refresh scheduling and the query scheduling, but stores the two kinds of messages in different queues and schedules those queues separately.
  • FIG. 3 is a structural diagram of the cache queues provided in this embodiment of the present invention. The cache queues include multiple query message queues and one refresh message queue, where s1-s5 represent query messages and u1-u4 represent refresh messages. The details are as follows:
  • There are multiple query message queues, one set up for each processor; a query message queue is used to buffer the query messages issued by the same processor. A query message in this embodiment of the present invention may include: a processor number, the type of the queried entry, the size of the query content, and the query content.
  • The processor number is used to determine the queue into which the query message is put and the processor to which the query result is returned; the entry type indicates which kind of entry is being queried, for example an ACL, a route, or another entry; the size of the query content indicates how many bits the query key is, for example 144 or 256; the query content is the search key, for example a route lookup takes the destination IP address as input, while an ACL lookup takes the IP 5-tuple of the packet as input, where the IP 5-tuple consists of the source IP address, destination IP address, source port number, destination port number, and protocol type.
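  • The patent does not define a concrete encoding for the query message; the following C struct is only a sketch of the fields just listed, with hypothetical names and field widths (for example the union of a route key and an ACL 5-tuple key).

```c
#include <stdint.h>

/* Kind of entry being queried (ACL, route, or other tables). */
typedef enum { ENTRY_ACL, ENTRY_ROUTE, ENTRY_OTHER } entry_type_t;

/* ACL lookups use the packet's IP 5-tuple as the search key. */
typedef struct {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
} ip_5tuple_t;

/* One query message as described above: processor number, entry type,
 * key width in bits (e.g. 144 or 256), and the search key itself.      */
typedef struct {
    uint8_t      processor_no;   /* selects the queue and the reply target */
    entry_type_t entry_type;
    uint16_t     key_bits;       /* 144, 256, ... */
    union {
        uint32_t    dst_ip;      /* route lookup key */
        ip_5tuple_t acl_key;     /* ACL lookup key */
    } key;
    uint32_t     msg_no;         /* assigned on enqueue to record order */
} query_msg_t;
```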
  • When a query message is put into the corresponding query message queue, a message number may be assigned to it to identify the order in which query messages enter that queue.
  • As for the refresh message queue, the multiple processors share a single FIFO buffer queue; only one buffer queue is set because in practical applications the CPU refreshes TCAM entries at a low frequency, generally only when the user configuration changes. The data structure of a refresh message includes a refresh message number, the type of the refreshed entry, and the refresh content.
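  • As with the query message, no concrete layout is given for the refresh message; the struct below is an assumed illustration of the three fields named above, with the add/delete/modify operation made explicit as an extra field.

```c
#include <stdint.h>

typedef enum { ENTRY_ACL, ENTRY_ROUTE, ENTRY_OTHER } entry_type_t;

/* Kind of update carried by a refresh message (assumption: the operation
 * kind is encoded explicitly; the text only lists add/delete/modify).    */
typedef enum { REFRESH_ADD, REFRESH_DELETE, REFRESH_MODIFY } refresh_op_t;

/* One refresh message: its number, the type of entry being refreshed,
 * and the refresh content (shown here as an opaque byte buffer).        */
typedef struct {
    uint32_t     msg_no;
    entry_type_t entry_type;
    refresh_op_t op;
    uint8_t      content[64];   /* size chosen arbitrarily for illustration */
    uint16_t     content_len;
} refresh_msg_t;
```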
  • The length of a cache queue is taken as an integer power of 2, so the position of a message in the cache queue can be found directly from the low-order bits of the query message number or refresh message number. For example, if the cache queue length is 32, i.e. 2 to the 5th power, the lower 5 bits of the binary message number are taken as its position in the cache queue: for message number 57, the binary value is 111001, the lower 5 bits are 11001, i.e. 25 in decimal, so the message is placed at position 25 of the cache queue.
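  • The power-of-two queue length allows the slot to be computed with a simple bit mask. The small C program below reproduces the worked example (message number 57 maps to slot 25 in a 32-entry queue); the function name queue_slot is hypothetical.

```c
#include <assert.h>
#include <stdio.h>

#define QUEUE_LEN 32u                 /* must be a power of two (here 2^5) */

/* Position of a message in the cache queue: the low-order bits of its
 * message number, i.e. msg_no modulo the queue length.                  */
static unsigned queue_slot(unsigned msg_no)
{
    return msg_no & (QUEUE_LEN - 1u); /* keep the lower 5 bits */
}

int main(void)
{
    /* Example from the text: message number 57 = 0b111001,
     * lower 5 bits = 0b11001 = 25, so it occupies slot 25.  */
    assert(queue_slot(57) == 25);
    printf("message 57 -> slot %u\n", queue_slot(57));
    return 0;
}
```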
  • The above method can be applied where a multi-core processor or multiple processors perform TCAM query and refresh processing through an FPGA relay. Since two branches are set on the FPGA, namely a query processing branch and a refresh processing branch, and the two branches are processed separately without interfering with each other, the problem of slow query response caused by the refresh priority being higher than the query priority is solved; high-speed table-lookup forwarding and entry refreshing can be provided, fast forwarding is realized, the throughput of the network device is improved, and thus the performance of the network device is improved.
  • Embodiment 2. This embodiment provides a method for scheduling TCAM query and refresh messages. The method is described by taking its implementation on the network device shown in FIG. 4 as an example. The network device shown in FIG. 4 includes the following functional units:
  • 1) A processor unit, connected to the FPGA through a query channel, which internally includes multiple processors, denoted processor 1, processor 2, ..., processor n; the multiple single-core processors or multiple processors can simultaneously issue TCAM query requests for different entries.
  • The query channel is responsible for transferring the query requests issued by the processors and the query results returned by the FPGA; according to the query result, a processor accesses the entry stored in the processor peripherals, obtains the information needed for packet forwarding, and forwards the packet.
  • 2) A CPU, connected to the FPGA through a refresh channel, which adds, deletes, and updates the entries in the TCAM through the FPGA and makes the corresponding modifications to the entries in the processor peripherals.
  • 3) Processor peripherals, including SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), DDR (Double Data Rate) memory, and similar devices. The result a processor obtains from a TCAM query is a pointer or index to the address of the specific entry stored in the peripherals, and the processor reads the corresponding entry information from the peripherals according to this pointer or index. When the CPU updates a TCAM entry, it also modifies the corresponding entry stored in the peripherals accordingly.
  • 4) The FPGA, including a query processing unit and a refresh processing unit, which respond respectively to the query requests issued by the processors and the refresh requests issued by the CPU; query processing and refresh processing operate independently, so the processors can still perform TCAM queries while the CPU is refreshing TCAM entries.
  • The query processing unit sets up multiple FIFO query buffer queues according to the number of single-core processors (corresponding to the query message queues in Embodiment 1); each processor corresponds to one FIFO queue, and polling scheduling is applied between the queues.
  • the query processing unit distributes the query message to the corresponding query queue according to the processor number.
  • the function of the refresh processing unit is to quickly respond to the CPU update command and update the entries in the TCAM.
  • 5) A TCAM unit, configured to respond to refresh messages sent by the CPU through the FPGA and update the entries, and to respond to query messages sent by the processors through the FPGA and return the query results.
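  • The TCAM unit itself is hardware; the following C stub is only a software stand-in that shows the kind of interface the query and refresh paths use (a lookup returning an index into the processor peripherals, and an update). A real TCAM matches all entries in parallel and supports ternary wildcard bits, which this linear-search toy does not; all names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define TCAM_DEPTH 1024

typedef struct { bool hit; uint32_t entry_index; } tcam_result_t;

/* Very small software stand-in for the TCAM unit: an array of fixed-width
 * keys searched linearly, only to show the interface shape.              */
static uint8_t tcam_key[TCAM_DEPTH][32];
static bool    tcam_valid[TCAM_DEPTH];

/* Query path: return the index of the matching entry (if any). */
tcam_result_t tcam_lookup(const void *key, uint16_t key_bytes)
{
    for (uint32_t i = 0; i < TCAM_DEPTH; i++)
        if (tcam_valid[i] && memcmp(tcam_key[i], key, key_bytes) == 0)
            return (tcam_result_t){ .hit = true, .entry_index = i };
    return (tcam_result_t){ .hit = false, .entry_index = 0 };
}

/* Refresh path: add or delete the entry at the given index. */
void tcam_update(uint32_t index, const void *key, uint16_t key_bytes, bool add)
{
    tcam_valid[index] = add;
    if (add)
        memcpy(tcam_key[index], key, key_bytes);
}
```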
  • Based on the network device shown in FIG. 4, this embodiment provides a method for query message enqueue and dequeue scheduling, referring to FIG. 5.
  • The query processing unit of the FPGA in this embodiment maintains a queue state vector, which is a binary value in which a bit set to 1 indicates that the corresponding queue holds messages; for example, with 8 queues in total, the queue state vector 00001001 indicates that queue 1 and queue 4 have messages to be dequeued while the other six queues do not. The method includes the following steps:
  • Step S502, query message enqueuing: after receiving a query message, the query processing unit enqueues it according to the processor number carried in the message, assigns it a message number, and sets the bit of the queue state vector corresponding to the enqueued queue to 1, indicating that the queue has messages to be dequeued.
  • Step S504, the query processing unit schedules the queues cyclically in a polling manner, dispatching one query message from one queue per round, as follows: Step 1, initialize the scheduling queue number n=1 and start scheduling from the first queue; Step 2, if n is greater than the total number of queues, set n=1, i.e. after the last queue has been scheduled, round-robin scheduling starts again from the first queue, otherwise the queue scheduled in this round is n; Step 3, check whether the corresponding bit of the queue state vector is set to 1; if it is, the queue has query messages to be dequeued and Step 4 is executed, otherwise the queue has no query messages to schedule and the next queue is scheduled, i.e. Step 5 is executed; Step 4, scheduling inside the queue: messages are dequeued in order of their query message numbers, and if all query messages of the queue have been scheduled, the queue's bit in the queue state vector is cleared to 0; Step 5, n=n+1, schedule the next queue and jump back to Step 3.
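  • A compact software rendering of steps 1-5 and the queue state vector, written in C for illustration only (per-queue state is reduced to a pending-message counter; names such as poll_once are hypothetical):

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_QUEUES 8                     /* total number of query queues */

/* Bit (i-1) of the state vector is 1 when queue i holds messages, e.g. with
 * 8 queues the value 00001001b means queues 1 and 4 have pending messages. */
static uint32_t queue_state;
static unsigned pending[NUM_QUEUES + 1]; /* per-queue message count (toy) */
static unsigned n = 1;                   /* queue examined in this round */

static void enqueue(unsigned q)          /* step S502 (simplified) */
{
    pending[q]++;
    queue_state |= 1u << (q - 1);        /* mark: queue q has messages */
}

/* One polling round following steps 1-5 of the embodiment. */
static int poll_once(void)
{
    for (unsigned tried = 0; tried < NUM_QUEUES; tried++) {
        if (n > NUM_QUEUES) n = 1;       /* step 2: wrap around */
        unsigned bit = 1u << (n - 1);
        if (queue_state & bit) {         /* step 3: bit set, dequeue */
            pending[n]--;                /* step 4: FIFO pop by message number */
            if (pending[n] == 0)
                queue_state &= ~bit;     /* queue drained: clear its bit */
            return (int)n++;             /* scheduled queue of this round */
        }
        n++;                             /* step 5: move to the next queue */
    }
    return 0;                            /* nothing pending anywhere */
}

int main(void)
{
    enqueue(1); enqueue(4); enqueue(4);
    for (int q; (q = poll_once()) != 0; )
        printf("dequeued one message from queue %d\n", q);
    return 0;
}
```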
  • Referring to the schematic diagram of query message enqueue and dequeue scheduling shown in FIG. 6, the FPGA puts each query message into the corresponding queue according to the processor number carried in the query message and dequeues the query messages of the queues in a polling manner.
  • Based on the network device shown in FIG. 4, FIG. 7 shows a flowchart of a method for querying TCAM entries according to this embodiment, which includes the following steps: Step S702, processor 1, processor 2, ..., processor n issue query messages as needed; a query message includes the processor number, the type of the queried entry, the size of the query content, and the query content, and is transmitted to the FPGA through the query channel; Step S704, the FPGA identifies the query message and enqueues it according to the processor number.
  • The query processing unit of the FPGA maintains multiple query message queues, one queue per processor; query messages are enqueued according to the processor number, the multiple query message queues are scheduled in a polling manner, each queue is internally scheduled on a FIFO basis, the TCAM query is performed, and the query result is returned to the requesting processor.
  • Step S706, the query processing unit dequeues the query message, performs the TCAM query, and returns the query result to the corresponding processor according to the processor number.
  • Step S708, the processor reads the specific content of the entry according to the TCAM query result, i.e. the address of the entry information in the processor peripherals.
  • Step S710, the processor forwards the packet according to the content of the queried entry.
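  • Steps S702-S710 seen from one processor can be sketched in C as follows; fpga_tcam_query, peripheral_table, and the route_entry_t layout are hypothetical stand-ins for the query channel and the processor peripherals, not part of the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Result returned over the query channel: a hit flag and an index of the
 * matched entry inside the processor peripherals (SRAM/DRAM/DDR).        */
typedef struct { bool hit; uint32_t entry_index; } query_result_t;

/* Entry content kept in the peripherals (layout assumed for illustration). */
typedef struct { uint32_t next_hop_ip; uint8_t out_port; } route_entry_t;

static route_entry_t peripheral_table[16];

/* Toy stand-in for steps S702-S706: the query travels through the query
 * channel, the FPGA enqueues it by processor number, performs the TCAM
 * lookup and returns the index; here this is faked with a simple hash.   */
static query_result_t fpga_tcam_query(uint8_t processor_no, uint32_t dst_ip)
{
    (void)processor_no;   /* would select the queue and the reply target */
    return (query_result_t){ .hit = true, .entry_index = dst_ip % 16u };
}

/* Steps S708-S710: read the entry from the peripherals and forward. */
static bool forward_packet(uint8_t processor_no, uint32_t dst_ip)
{
    query_result_t r = fpga_tcam_query(processor_no, dst_ip);
    if (!r.hit)
        return false;                        /* no matching entry */
    route_entry_t e = peripheral_table[r.entry_index];
    printf("forward via next hop %08x, port %u\n",
           (unsigned)e.next_hop_ip, e.out_port);
    return true;
}

int main(void)
{
    peripheral_table[1] = (route_entry_t){ .next_hop_ip = 0x0a000001, .out_port = 3 };
    return forward_packet(0, 0x0a010001) ? 0 : 1;   /* maps to slot 1 here */
}
```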
  • Based on the network device shown in FIG. 4, FIG. 8 is a flowchart of a method for the CPU to refresh entries according to this embodiment, which includes the following steps: Step S802, the CPU issues an entry refresh message, which includes the type of the entry and the content to be refreshed, and the refresh message is transmitted to the FPGA through the refresh channel; Step S804, the FPGA identifies the refresh message and enqueues it; Step S806, the refresh message is scheduled and dequeued on a first-in-first-out basis; Step S808, upon receiving the refresh message, the TCAM updates the entry, including add, delete, and modify operations; Step S810, the CPU updates the corresponding entry in the processor peripherals, including add, delete, and modify operations.
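  • Steps S806-S810 can likewise be sketched in C; the refresh_msg_t layout and the two toy tables standing in for the TCAM and the processor peripherals are assumptions made for illustration only.

```c
#include <stdint.h>
#include <string.h>

typedef enum { OP_ADD, OP_DELETE, OP_MODIFY } refresh_op_t;

typedef struct {
    refresh_op_t op;
    uint32_t     entry_index;    /* which TCAM entry / peripheral slot */
    uint8_t      key[32];        /* new TCAM key (for add/modify) */
    uint8_t      content[64];    /* new entry content for the peripherals */
} refresh_msg_t;

/* Toy stand-ins for the two tables touched by a refresh. */
static uint8_t tcam_table[1024][32];
static uint8_t tcam_valid[1024];
static uint8_t peripheral_table[1024][64];

/* Steps S806-S810: a refresh message dequeued in FIFO order updates the
 * TCAM entry (add/delete/modify), and the corresponding entry in the
 * processor peripherals is updated at the same time.                    */
static void apply_refresh(const refresh_msg_t *m)
{
    switch (m->op) {
    case OP_DELETE:
        tcam_valid[m->entry_index] = 0;                       /* S808 delete */
        break;
    case OP_ADD:
    case OP_MODIFY:
        tcam_valid[m->entry_index] = 1;                       /* S808 add/modify */
        memcpy(tcam_table[m->entry_index], m->key, sizeof m->key);
        memcpy(peripheral_table[m->entry_index],              /* S810 */
               m->content, sizeof m->content);
        break;
    }
}
```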
  • In the above method, the processors perform TCAM query access through the FPGA relay; the FPGA returns a pointer or index to the address of the entry in the peripherals, and the processor reads the entry in the processor peripherals according to the returned result. In addition, the CPU performs the TCAM refresh operation (i.e. the update of TCAM entries) through the FPGA relay and updates the corresponding entry information in the processor peripherals at the same time.
  • The scheduling method provided in this embodiment supports parallel processing of query and refresh operations: entries can be updated while queries are in progress, and queries can be performed while entries are being updated. Moreover, by setting query message queues on the FPGA corresponding to the number of processors, the method overcomes the related-art limitation of supporting only single-processor queries and of insufficient capacity for parallel queries from multiple processors or threads and for multiple kinds of entries; when several single-core processors query multiple kinds of entries in parallel, lookup efficiency is high, and because polling scheduling is used, the performance of the single-core processors is well balanced.
  • Embodiment 3. FIG. 9 is a structural block diagram of an FPGA device according to an embodiment of the present invention.
  • the device includes: a query message enqueue module 92, configured to place a query message into a query message queue after receiving a query message;
  • the refresh message enqueue module 94 is configured to: after receiving the refresh message, put the refresh message into the refresh message queue;
  • the query scheduling module 96 is connected to the query message enqueue module 92, and is configured to schedule the query message in the query message queue.
  • the refresh scheduling module 98 is coupled to the refresh message enqueue module 94 and configured to schedule refresh messages in the refresh message queue.
  • The query message enqueue module 92 includes: a queue determining unit, configured to determine, after a query message is received, the corresponding query message queue according to the processor number carried in the query message, wherein the FPGA device is provided with multiple query message queues and the query message queues are in one-to-one correspondence with the processors; and an enqueue unit, configured to put the query message into the query message queue determined by the queue determining unit. The query scheduling module 96 includes: a polling scheduling unit, configured to schedule the multiple query message queues in a polling manner; and a dequeue unit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit.
  • The dequeue unit includes: a dequeue subunit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit in a first-in-first-out (FIFO) manner.
  • The refresh scheduling module 98 includes: a refresh scheduling unit, configured to schedule the refresh messages in the refresh message queue in a first-in-first-out (FIFO) manner.
  • The query message includes: a processor number, the type of the queried entry, the size of the query content, and the query content.
  • The processor number is used to determine the queue into which the query message is put and the processor to which the query result is returned; the entry type identifies which kind of entry is being queried, for example an ACL, a route, or another entry; the size of the query content indicates how many bits the query key is, for example 144 or 256; the query content is the search key, for example a route lookup takes the destination IP address as input, while an ACL lookup takes the IP 5-tuple of the packet as input, where the IP 5-tuple consists of the source IP address, destination IP address, source port number, destination port number, and protocol type. When a query message is put into the corresponding query message queue, a message number may be assigned to it to identify the order in which query messages enter that queue.
  • The multiple processors in this embodiment share a single FIFO buffer queue for refresh messages; only one refresh message queue is set because in practical applications the CPU refreshes TCAM entries at a low frequency, generally only when the user configuration changes.
  • The data structure of a refresh message includes a refresh message number, the type of the refreshed entry, and the refresh content.
  • The above FPGA device can be applied where a multi-core processor or multiple processors perform TCAM query and refresh processing through an FPGA relay. Since two branches are set on the FPGA device, namely a query processing branch and a refresh processing branch, and the two branches are processed separately without interfering with each other, the problem of slow query response caused by the refresh priority being higher than the query priority is solved; high-speed table-lookup forwarding and entry refreshing can be provided, fast forwarding is realized, and the throughput and hence the performance of the network device are improved.
  • Embodiment 4. FIG. 10 is a block diagram showing the structure of a network device according to an embodiment of the present invention; the network device includes an FPGA device 102, a processor 104, and a CPU 106, and the FPGA device 102 is connected to the processor 104 and the CPU 106, respectively.
  • The FPGA device 102 can be implemented in the manner of Embodiment 3 and is not described in detail here.
  • the processor 104 is configured to send a query message to the FPGA device 102, and receive a query result returned by the FPGA device, obtain routing information according to the query result, and perform message forwarding according to the routing information;
  • The CPU 106 is configured to send refresh messages to the FPGA device 102, where a refresh message carries indication information for performing a refresh operation on the ternary content addressable memory (TCAM).
  • The network device in this embodiment can also be implemented as the network device shown in FIG. 4 in Embodiment 2; the specific functions are the same as in this embodiment and are not described again here.
  • The network device in this embodiment sets two branches on the FPGA device, namely a query processing branch and a refresh processing branch, and processes the two branches separately without mutual interference, thereby solving the problem of slow query response caused by the refresh priority being higher than the query priority; it can provide high-speed table-lookup forwarding and entry refreshing, realize fast forwarding, improve the throughput of the network device, and thus improve the performance of the network device.
  • The technique provided by the foregoing embodiments, by processing query and refresh separately, makes query processing and refresh processing not interfere with each other and improves the efficiency of both. By enqueuing the query messages of different processors separately, parallel queries are realized, the query queue of each single-core processor is scheduled by polling, and the performance of the single-core processors is balanced; the technique can respond quickly to the processors' TCAM queries and to entry refreshes, realize fast forwarding, improve the throughput of network devices, and improve the performance of network devices.
  • The above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device and, in some cases, the steps shown or described may be performed in an order different from that herein; or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module.
  • the invention is not limited to any specific combination of hardware and software.
  • The above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and changes can be made to the present invention by those skilled in the art. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Multi Processors (AREA)

Abstract

A method and a device for dispatching TCAM (Ternary Content Addressable Memory) query and refresh messages are provided. The method comprises: an FPGA (Field Programmable Gate Array) places a query message into a query message queue after receiving the query message; the FPGA places a refresh message into a refresh message queue after receiving the refresh message; and the FPGA separately dispatches the query messages in the query message queue and the refresh messages in the refresh message queue. The solution of the invention solves the problem of slow query response caused by the refresh priority being higher than the query priority, provides high-speed table-lookup forwarding and entry refreshing, realizes fast forwarding, and improves the throughput and hence the performance of the network device.

Description

调度 TCAM查询和刷新消息的方法和装置 技术领域 本发明涉及网络通信技术领域, 尤其涉及一种调度 TCAM ( Ternary Content Addressable Memory, 三态内容寻址存储器) 查询和刷新消息的方法和装置。 背景技术  TECHNICAL FIELD The present invention relates to the field of network communication technologies, and in particular, to a method and apparatus for scheduling a TCAM (Ternary Content Addressable Memory) query and refresh message. Background technique
TCAM主要用于网络设备报文转发时快速查找 ACL (Access Control List, 访问控 制链表)、 路由等表项。 基于 FPGA (Field Programmable Gate Array, 现场可编程门阵 列) 的 TCAM查找及刷新技术提供表项更新和查询调度, 其中 FPGA在处理器或者 CPU和 TCAM之间起到中转作用。在路由器和交换机等互连设备上,为了实现快速查 表转发, TCAM的应用越来越普遍。 随着宽带网络的迅速发展, 多核处理器的应用也越来越广泛, 多个处理器内核集 合起来可以提供很高的处理能力, 为了充分利用每个单核处理器的资源, 转发时将报 文的处理分散到各个处理器单元, 单个处理器单元都需要对报文进行 ACL、 路由等表 项的查找, 同时 CPU需要对 TCAM中的表项条目以及处理器外设中的表项内容进行 刷新操作, 多个处理器需要共同访问单一的 TCAM 外设, 如何使多个处理器实现 TCAM的快速查表转发和表项条目刷新, 并且使得各处理器的性能均衡, 这就是基于 FPGA的 TCAM查询及刷新装置需要解决的问题。 如图 1示出了相关技术基于 FPGA的 TCAM查询及刷新系统的结构框图,其包括 处理器、 CPU接口、 FPGA、 TCAM单元和 SSRAM (串行静态随机存储器), SSRAM 用于存放路由表。 该技术的 FPGA将 TCAM查询和 CPU对表项的刷新请求放在同一 个队列中, 基于查询和刷新的优先级对队列中的请求进行调度, 其中, CPU对表项的 刷新优先级高于处理器对 TCAM查询的优先级。这种分优先级调度的方法, 使得查询 和刷新的藕合度比较紧密, 当有大量表项更新时, 查询的响应速度将非常低, 易造成 网络中报文的阻塞, 影响网络设备的吞吐能力。 发明内容 本发明提供了一种调度 TCAM查询和刷新消息的方法和装置(包括 FPGA装置和 网络设备), 以至少解决上述因刷新优先级高于查询优先级引起的查询响应较慢的问 题。 根据本发明的一个方面, 提供了一种调度 TCAM查询和刷新消息的方法, 包括: FPGA收到查询消息后, 将该查询消息放入查询消息队列; FPGA收到刷新消息后, 将该刷新消息放入刷新消息队列; FPGA分别对查询消息队列中的查询消息和刷新消 息队列中的刷新消息进行调度。 优选地, FPGA上设置有多个查询消息队列, 且查询消息队列与处理器一一对应;The TCAM is used to quickly search for ACLs (Access Control Lists) and routes. The TCAM lookup and refresh technology based on FPGA (Field Programmable Gate Array) provides table item update and query scheduling, in which the FPGA plays a relay role between the processor or the CPU and the TCAM. On interconnected devices such as routers and switches, TCAM applications are becoming more common in order to achieve fast table lookup forwarding. With the rapid development of broadband networks, the application of multi-core processors is becoming more and more extensive. Multiple processor cores can be combined to provide high processing power. In order to make full use of the resources of each single-core processor, it will be reported when forwarding. The processing of the text is distributed to each processor unit. Each processor unit needs to search for the ACL, routing, and other entries of the packet. At the same time, the CPU needs to perform the entry of the entry in the TCAM and the contents of the entry in the processor peripheral. Refresh operation, multiple processors need to access a single TCAM peripheral together, how to enable multiple processors to implement TCAM fast table lookup forwarding and table entry refresh, and balance the performance of each processor, this is FPGA-based TCAM Query and refresh the device to solve the problem. FIG. 1 is a structural block diagram of a related art FPGA-based TCAM query and refresh system, which includes a processor, a CPU interface, an FPGA, a TCAM unit, and an SSRAM (Serial Static Random Access Memory), and the SSRAM is used to store a routing table. The FPGA of the technology puts the TCAM query and the refresh request of the CPU for the table in the same queue, and schedules the request in the queue based on the priority of the query and the refresh, wherein the CPU has higher priority for refreshing the entry than the processing. The priority of the TCAM query. This method of prioritized scheduling makes the query and refresh coordination more compact. When there are a large number of entries, the response speed of the query will be very low, which will easily block the packets on the network and affect the throughput of the network device. . 
SUMMARY OF THE INVENTION The present invention provides a method and apparatus (including an FPGA device and a network device) for scheduling a TCAM query and refresh message to at least solve the above problem of slow query response caused by a refresh priority higher than a query priority. According to an aspect of the present invention, a method for scheduling a TCAM query and refresh message is provided, including: after receiving an inquiry message, the FPGA puts the query message into a query message queue; after receiving the refresh message, the FPGA refreshes the message. Put into the refresh message queue; the FPGA schedules the query message in the query message queue and the refresh message in the refresh message queue, respectively. Preferably, a plurality of query message queues are set on the FPGA, and the query message queues are in one-to-one correspondence with the processor;
FPGA将查询消息放入查询消息队列包括: FPGA将查询消息放入查询消息携带的处 理器编号对应的查询消息队列中; FPGA对查询消息队列中的查询消息进行调度包括: FPGA采用轮询方式调度多个查询消息队列, 对被调度的查询消息队列中的查询消息 进行出队列处理。 优选地, FPGA对被调度的查询消息队列中的查询消息进行出队列处理包括:The FPGA puts the query message into the query message queue, including: the FPGA puts the query message into the query message queue corresponding to the processor number carried in the query message; the FPGA schedules the query message in the query message queue to include: the FPGA adopts polling mode scheduling Multiple query message queues are used to queue the query messages in the scheduled query message queue. Preferably, the FPGA dequeues the query message in the scheduled query message queue, including:
FPGA采用先进先出 FIFO 的方式对被调度的查询消息队列中的查询消息进行出队列 处理。 优选地, FPGA对刷新消息队列中的刷新消息进行调度包括: FPGA采用先进先 出 FIFO的方式对刷新消息队列中的刷新消息进行调度。 优选地, FPGA对查询消息队列中的查询消息进行调度之后, 还包括: FPGA接 收查询消息的查询结果, 将查询结果返回给查询消息对应的处理器; 处理器根据查询 结果获取路由信息, 根据路由信息转发报文。 根据本发明的另一方面, 提供了一种 FPGA装置, 包括: 查询消息入队模块, 设 置为收到查询消息后, 将该查询消息放入查询消息队列; 刷新消息入队模块, 设置为 收到刷新消息后, 将该刷新消息放入刷新消息队列; 查询调度模块, 设置为对查询消 息队列中的查询消息进行调度; 刷新调度模块, 设置为对刷新消息队列中的刷新消息 进行调度。 优选地, 查询消息入队模块包括: 队列确定单元, 设置为接收到查询消息后, 根 据查询消息携带的处理器编号确定对应的查询消息队列; 其中, FPGA装置上设置有 多个查询消息队列, 且查询消息队列与处理器一一对应; 入队单元, 设置为将查询消 息放入队列确定单元确定的查询消息队列中; 查询调度模块包括: 轮询调度单元, 设 置为采用轮询方式调度多个查询消息队列; 出队单元, 设置为对轮询调度单元调度的 查询消息队列中的查询消息进行出队列处理。 优选地, 出队单元包括: 出队子单元, 设置为采用先进先出 FIFO 的方式对轮询 调度单元调度的查询消息队列中的查询消息进行出队列处理。 优选地, 刷新调度模块包括: 刷新调度单元, 设置为采用先进先出 FIFO 的方式 对刷新消息队列中的刷新消息进行调度。 根据本发明的又一方面, 提供了一种网络设备, 包括上述 FPGA装置, 该网络设 备还包括: 处理器, 设置为向 FPGA装置发送查询消息, 以及接收 FPGA装置返回的 查询结果, 根据查询结果获取路由信息, 根据路由信息进行报文转发; CPU, 设置为 向 FPGA装置发送刷新消息,该刷新消息携带有对调度三态内容寻址存储器 TCAM进 行刷新操作的指示信息。 通过本发明, 采用 FPGA上设置两个分支, 即查询处理分支和刷新处理分支, 对 两个分支采用单独进行处理, 互不干扰, 解决了因刷新优先级高于查询优先级引起的 查询响应较慢的问题, 能够提供高速的查表转发和表项刷新, 实现快速转发, 提升了 网络设备的吞吐能力, 进而提高了网络设备的性能。 附图说明 此处所说明的附图用来提供对本发明的进一步理解, 构成本申请的一部分, 本发 明的示意性实施例及其说明用于解释本发明, 并不构成对本发明的不当限定。 在附图 中: 图 1是根据相关技术的基于 FPGA的 TCAM查询及刷新系统的结构框图; 图 2是根据本发明实施例 1的调度 TCAM查询和刷新消息的方法流程图; 图 3是根据本发明实施例 1的提供的缓存队列结构图; 图 4是根据本发明实施例 2的网络设备的结构框图; 图 5是根据本发明实施例 2的查询消息入队和出队调度的方法流程图; 图 6是根据本发明实施例 2的查询消息入队和出队调度的示意图; 图 7是根据本发明实施例 2的 TCAM查询表项的方法流程图; 图 8是根据本发明实施例 2的 CPU对表项刷新的方法流程图; 图 9是根据本发明实施例 3的 FPGA装置的结构框图; 图 10是根据本发明实施例 4的网络设备的结构框图。 具体实施方式 下文中将参考附图并结合实施例来详细说明本发明。 需要说明的是, 在不冲突的 情况下, 本申请中的实施例及实施例中的特征可以相互组合。 实施例 1 图 2示出了根据本发明实施例的一种调度 TCAM查询和刷新消息的方法流程图, 该方法包括以下步骤: 步骤 S202, 现场可编程门阵列 FPGA收到查询消息后, 将查询消息放入查询消息 队列; 步骤 S204, FPGA收到刷新消息后, 将该刷新消息放入刷新消息队列; 步骤 S206, FPGA分别对查询消息队列中的查询消息和刷新消息队列中的刷新消 息进行调度。 上述 FPGA对 TCAM的查询和刷新消息采用分路进行存放,能够实现并行调度查 询和刷新。 为了实现对于多核处理器时, 各处理器 TCAM查询的均衡处理, 优选地, 上述 FPGA上设置有多个查询消息队列, 且查询消息队列与处理器一一对应; 相应地, 步 骤 S202包括: FPGA将查询消息放入查询消息携带的处理器编号对应的查询消息队列 中; 步骤 S206中的 FPGA对查询消息队列中的查询消息进行调度包括: FPGA采用轮 询方式调度多个查询消息队列, 对被调度的查询消息队列中的查询消息进行出队列处 理。 所谓轮询调度指对每个查询消息队列按照一定的顺序依次调度, 在一段时间内, 每个查询消息队列被调度的次数基本相同。 优选地, FPGA对被调度的查询消息队列中的查询消息进行出队列处理包括: FPGA采用先进先出 (FIFO, First In First Out) 的方式对被调度的查询消息队列中的 查询消息进行出队列处理。 FPGA对刷新消息队列中的刷新消息进行调度包括: FPGA采用先进先出 FIFO的 方式对刷新消息队列中的刷新消息进行调度。 上述 FPGA对查询消息队列中的查询消息进行调度之后, 还可以包括: FPGA接 收查询消息的查询结果, 将查询结果返回给查询消息对应的处理器; 处理器根据查询 结果获取路由信息, 根据路由信息转发报文。 在实际应用中, 一般当用户配置改变或者网络中链路状态发生变化时, TCAM表 项条目才会进行刷新, 而这些改变的频率较低, 这就使得分优先级调度有点多余, 所 以本实施例没有为刷新调度和查询调度设置优先级, 而是将二者分别存放在不同的队 列中, 对存放的队列分别进行调度。 参见图 3, 为本发明实施例提供的缓存队列结构图, 缓存队列包括多个查询消息 队列和一个刷新消息队列, 其中, sl-s5表示查询消息, ul-u4表示刷新消息, 具体介 绍如下: 查询消息队列为多个, 对应每个处理器设置, 一个查询消息队列用于缓存来自同 一个处理器发出的查询消息, 本发明实施例的查询消息可以包括: 处理器编号、 所查 询表项类型、 查询内容的大小以及查询内容。 其中, 处理器编号用于确定查询消息所 入的队列号, 以及查询结果返回的处理器; 表项类型表示是何种表项的查询, 是 ACL 或者路由, 还是其他表项; 查询内容的大小表示是多少位的查询, 比如 144/256; 查询 内容是输入查找的条件, 比如查路由输入的内容是目的 IP, ACL查找输入的内容是报 文的 IP五元组, 该 IP五元组包括源 IP址, 目的 IP地址, 源端口号, 目的端口号, 以 及协议类型。 在将查询消息放入对应的查询消息队列时, 可以为该查询消息设置消息 编号, 以标识该查询消息队列中查询消息进入的先后顺序。 刷新消息队列, 多个处理器共用一个 FIFO缓存队列, 只设一个缓存队列是由于 在实际应用中 CPU对 TCAM条目的刷新操作频率较低, 一般是在用户配置更改的情 况下才刷新表项。 刷新消息的数据结构包括刷新消息编号、 刷新条目的类型、 刷新内 容。 缓存队列的长度取 2的整数次方, 可以直接用查询消息编号或者刷新消息编号的 低位找到消息在缓存队列中的位置, 比如缓存队列长度为 32, 为 2的 5次方, 则取消 息编号的二进制数低 5位作为其在缓存队列中的位置, 例如, 消息编号为 57, 其二进 制数为 111001, 低 5位为 11001, 十进制为 25, 则该消息入缓存队列 25的位置。 上述方法可以应用于多核处理器或者多个处理器用 FPGA中转进行 TCAM查询及 刷新处理中, 由于其 FPGA上设置两个分支, 即查询处理分支和刷新处理分支, 对两 个分支采用单独进行处理, 互不干扰, 解决了因刷新优先级高于查询优先级引起的查 询响应较慢的问题, 能够提供高速的查表转发和表项刷新, 实现快速转发, 提升了网 络设备的吞吐能力, 进而提高网络设备的性能。 实施例 2 本实施例提供了一种调度 TCAM查询和刷新消息的方法,该方法以在图 4所示的 网络设备上实现为例进行说明, 图 4所示的网络设备包括如下功能单元: The FPGA uses the first-in-first-out FIFO method to queue the 
query messages in the scheduled query message queue. Preferably, scheduling the refresh message in the refresh message queue by the FPGA comprises: the FPGA scheduling the refresh message in the refresh message queue by using a first-in-first-out FIFO. Preferably, after the FPGA queries the query message in the query message queue, the method further includes: receiving, by the FPGA, the query result of the query message, and returning the query result to the processor corresponding to the query message; the processor acquiring the routing information according to the query result, according to the route Information forwarding message. According to another aspect of the present invention, an FPGA device is provided, including: a query message enqueue module, configured to: after receiving a query message, put the query message into a query message queue; refresh the message enqueue module, set to receive After the message is refreshed, the refresh message is placed in the refresh message queue; the query scheduling module is configured to schedule the query message in the query message queue; and the refresh scheduling module is configured to schedule the refresh message in the refresh message queue. Preferably, the query message enqueue module includes: a queue determining unit, configured to: after receiving the query message, determine a corresponding query message queue according to the processor number carried in the query message; wherein, the FPGA device is configured with multiple query message queues, And the query message queue has a one-to-one correspondence with the processor; the enqueue unit is configured to put the query message into the query message queue determined by the queue determining unit; the query scheduling module includes: a polling scheduling unit, configured to use the polling mode to schedule multiple Query message queue; the dequeuing unit is set to perform queue processing on the query message in the query message queue scheduled by the polling scheduling unit. Preferably, the dequeuing unit comprises: a dequeue subunit, configured to perform a queue processing of the query message in the query message queue scheduled by the polling scheduling unit by using a first in first out FIFO. Preferably, the refresh scheduling module includes: a refresh scheduling unit configured to schedule a refresh message in the refresh message queue by using a first-in first-out FIFO. According to still another aspect of the present invention, a network device is provided, including the foregoing FPGA device, the network device further comprising: a processor, configured to send a query message to the FPGA device, and receive a query result returned by the FPGA device, according to the query result Obtaining routing information, and performing packet forwarding according to the routing information; the CPU is configured to send a refresh message to the FPGA device, where the refresh message carries indication information for performing a refresh operation on the scheduled tri-state content addressing memory TCAM. Through the invention, two branches are set on the FPGA, that is, the query processing branch and the refresh processing branch are used, and the two branches are separately processed without mutual interference, thereby solving the query response caused by the refresh priority being higher than the query priority. 
The slow problem can provide high-speed table lookup forwarding and table item refreshing, achieve fast forwarding, improve the throughput of network devices, and improve the performance of network devices. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings, which are set to illustrate,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, In the drawings: FIG. 1 is a structural block diagram of an FPGA-based TCAM query and refresh system according to the related art; FIG. 2 is a flow chart of a method for scheduling a TCAM query and refresh message according to Embodiment 1 of the present invention; FIG. 4 is a structural block diagram of a network device according to Embodiment 2 of the present invention; FIG. 5 is a flowchart of a method for querying message enqueue and dequeue scheduling according to Embodiment 2 of the present invention; FIG. 6 is a schematic diagram of a query message enqueue and dequeue schedule according to Embodiment 2 of the present invention; FIG. 7 is a flowchart of a method for querying a TCAM query entry according to Embodiment 2 of the present invention; FIG. 8 is a flowchart according to Embodiment 2 of the present invention; FIG. 9 is a block diagram showing the structure of an FPGA device according to Embodiment 3 of the present invention; FIG. 10 is a block diagram showing the structure of a network device according to Embodiment 4 of the present invention. BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, the present invention will be described in detail with reference to the accompanying drawings. It should be noted that the embodiments in the present application and the features in the embodiments may be combined with each other without conflict. Embodiment 1 FIG. 2 is a flowchart of a method for scheduling a TCAM query and refresh message according to an embodiment of the present invention. The method includes the following steps: Step S202: After receiving a query message, the field programmable gate array FPGA queries The message is placed in the query message queue. Step S204, after receiving the refresh message, the FPGA puts the refresh message into the refresh message queue. Step S206: The FPGA separately schedules the query message in the query message queue and the refresh message in the refresh message queue. . The above FPGA stores the TCAM query and refresh messages in separate ways, which enables parallel scheduling query and refresh. In order to achieve equalization processing for each processor TCAM query for a multi-core processor, preferably, the plurality of query message queues are set on the FPGA, and the query message queue is in one-to-one correspondence with the processor; correspondingly, step S202 includes: The query message is placed in the query message queue corresponding to the processor number carried in the query message; the FPGA in step S206 schedules the query message in the query message queue to include: the FPGA uses a polling manner to schedule multiple query message queues, and the The query message in the scheduled query message queue is queued. The so-called polling scheduling means that each query message queue is sequentially scheduled in a certain order, and the number of times each query message queue is scheduled is substantially the same in a period of time. 
Preferably, the FPGA performs dequeue processing on the query message in the scheduled query message queue, including: the FPGA uses a first in first out (FIFO) manner to dequeue the query message in the scheduled query message queue. deal with. The FPGA schedules the refresh message in the refresh message queue to include: The FPGA uses the first-in-first-out FIFO to schedule the refresh message in the refresh message queue. After the FPGA queries the query message in the query message queue, the method may further include: the FPGA receiving the query result of the query message, and returning the query result to the processor corresponding to the query message; the processor acquiring the routing information according to the query result, according to the routing information Forward the message. In practical applications, when the user configuration changes or the link status changes in the network, the TCAM entry will be refreshed, and the frequency of these changes is low, which makes the prioritized scheduling redundant, so this implementation The example does not set the priority for the refresh schedule and the query schedule, but stores the two in different queues separately, and schedules the stored queues separately. FIG. 3 is a structural diagram of a cache queue according to an embodiment of the present invention. The cache queue includes a plurality of query message queues and a refresh message queue, wherein sl-s5 represents a query message, and ul-u4 represents a refresh message, which is specifically described as follows: The query message queue is multiple, corresponding to each processor setting, and one query message queue is used to cache the query message sent by the same processor. The query message in the embodiment of the present invention may include: a processor number, a type of the queryed item. , the size of the query content and the content of the query. The processor number is used to determine the queue number into which the query message is entered, and the processor returned by the query result; the entry type indicates whether the entry is an ACL or a route, or other entries; Indicates how many bits of the query, such as 144/256; the query content is the condition for input search, for example, the content of the route input is the destination IP, and the content of the ACL search input is the IP quintuple of the message, and the IP quintuple includes Source IP address, destination IP address, source port number, destination port number, and protocol type. When the query message is placed in the corresponding query message queue, a message number may be set for the query message to identify the sequence in which the query message enters in the query message queue. The message queue is refreshed. Multiple processors share a FIFO buffer queue. Only one cache queue is set. Because the CPU refreshes the TCAM entries in the actual application, the frequency is usually refreshed when the user configuration changes. The data structure of the refresh message includes refreshing the message number, refreshing the type of the entry, and refreshing the content. The length of the cache queue is the integer power of 2, and the location of the message in the cache queue can be found directly by using the query message number or the lower address of the refresh message number. For example, the length of the cache queue is 32, which is the 5th power of 2, and the message number is taken. The binary number is 5 bits lower as its position in the buffer queue. 
For example, the message number is 57, its binary number is 111001, the lower 5 bits are 11001, and the decimal is 25, then the message enters the position of the buffer queue 25. The above method can be applied to a multi-core processor or a plurality of processors using FPGA to perform TCAM query and refresh processing. Since two branches are set on the FPGA, that is, the query processing branch and the refresh processing branch, the two branches are separately processed. Do not interfere with each other, and solve the problem caused by the refresh priority being higher than the query priority. The problem of slow response is able to provide high-speed table lookup forwarding and table item refreshing, which enables fast forwarding, improves the throughput of network devices, and improves the performance of network devices. Embodiment 2 This embodiment provides a method for scheduling a TCAM query and a refresh message. The method is described as an example on the network device shown in FIG. 4. The network device shown in FIG. 4 includes the following functional units:
1 )处理器单元, 通过查询通道与 FPGA相连, 其内部包括多个处理器, 分别用处 理器 1、 处理器 2、 ... ...、 处理器 n表示, 多个单核处理器或者多个处理器可以同时发 出对不同表项的 TCAM查询请求。查询通道负责传递从各个处理器发出的查询请求以 及从 FPGA返回的查询结果, 处理器根据该查询的结果, 访问存储在处理器外设中的 表项, 获取报文转发所需的信息, 以实现报文的转发。 1) The processor unit is connected to the FPGA through the query channel, and includes a plurality of processors therein, respectively represented by the processor 1, the processor 2, the processor n, and the plurality of single-core processors or Multiple processors can simultaneously issue TCAM query requests for different entries. The query channel is responsible for transmitting the query request sent from each processor and the query result returned from the FPGA, and the processor accesses the entry stored in the processor peripheral according to the result of the query, and obtains information required for packet forwarding, Implement packet forwarding.
2) CPU, 通过刷新通道与 FPGA相连, 还通过 FPGA对 TCAM中表项条目进行 增加、 删除、 更新操作, 同时对处理器外设中表项进行相应修改。 2) The CPU is connected to the FPGA through the refresh channel, and also adds, deletes, and updates the entries in the TCAM through the FPGA, and simultaneously modifies the entries in the processor peripheral.
3 ) 处理器外设, 包括 SRAM ( Static Random Access Memory, 静态随机存储器)、 DRAM (Dynamic Random Access Memory, 动态随机存储器) 以及 DDR (Double Data Rate, 双倍数据传输速率存储器) 等外设, 上述处理器查询 TCAM得到的结果是一个 指向存储在外设中的具体表项地址的指针或者索引, 根据该指针或索引处理器从外设 中读取相应表项信息。 CPU对 TCAM表项条目进行更新操作的同时,对存储在外设中 相应表项进行相应的修改。 3) processor peripherals, including SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), and DDR (Double Data Rate) The result of the processor querying the TCAM is a pointer or index to the address of a particular entry stored in the peripheral, from which the corresponding entry information is read from the peripheral. When the CPU updates the TCAM entry, it also modifies the corresponding entry stored in the peripheral.
4) FPGA, 包括查询处理单元和刷新处理单元, 其分别响应处理器和 CPU发出的 查询和刷新请求, 查询处理和刷新处理独立运作, 在 CPU刷新 TCAM表项条目的同 时, 处理器仍然可以进行 TCAM查询。 其中, 查询处理单元根据单核处理器的个数设置有多个 FIFO查询缓存队列 (对 应于实施例 1中的查询消息队列), 每个处理器对应一个 FIFO队列, 队列之间采用轮 询调度的原则。 查询处理单元根据处理器编号将查询消息分发到对应的查询队列中。 刷新处理单元的功能在于快速响应 CPU的更新命令, 对 TCAM中表项进行更新。 4) The FPGA, including the query processing unit and the refresh processing unit, respectively respond to the query and refresh request issued by the processor and the CPU, and the query processing and the refresh processing operate independently. When the CPU refreshes the TCAM entry, the processor can still perform TCAM query. The query processing unit sets a plurality of FIFO query cache queues according to the number of single core processors (corresponding to the query message queue in Embodiment 1), each processor corresponds to one FIFO queue, and polling scheduling is used between the queues. the rules. The query processing unit distributes the query message to the corresponding query queue according to the processor number. The function of the refresh processing unit is to quickly respond to the CPU update command and update the entries in the TCAM.
5 ) TCAM单元, 用于响应 CPU通过 FPGA发来的刷新消息, 更新表项条目; 以 及用于响应处理器通过 FPGA发来的查询消息, 并返回查询结果。 基于图 4所示的网络设备,本实施例提供了一种查询消息入队和出队调度的方法, 本实施例的 FPGA的查询处理单元维护一个队列状态向量, 队列状态向量是一个二进 制的数值,相应位置 1表示该队列有消息,例如:队列总数为 8,队列状态向量 00001001 表示队列 1和队列 4中有消息需要出队, 而其他 6个队列中没有消息需要出队, 参见 图 5, 该方法包括以下步骤: 步骤 S502, 查询消息入队, 具体为: 查询处理单元接收到查询消息后, 根据查询 消息的处理器编号分别入队, 对查询消息编号, 根据入队的队列号将队列状态向量的 相应位置 1, 表示该队列有消息需要出队; 步骤 S504, 查询处理单元对各队列采用轮询的方式循环调度, 每轮调度一个队列 中的一个查询消息; 具体如下: 步骤 1, 初始化调度队列号为 n=l, 从第一个队列开始调度; 步骤 2, 如果 n大于队列总数, 则设置 n=l, 即最后一个队列执行了调度后, 再从 第一个队列开始循环调度; 否则, 本轮调度队列为 n; 步骤 3, 判断队列状态向量的相应位是否置 1, 若置 1, 表示该队列有查询消息需 要出队, 则执行步骤 4; 否则, 表示该队列没有查询消息需要调度, 执行下一个队列 的调度, 即执行步骤 5; 步骤 4, 队列内部的调度, 根据查询消息编号顺序出队, 如果该队列所有查询消 息都被调度出去, 将该队列对应的队列向量中的位清 0。 步骤 5, n=n+l , 执行下一个队列的调度, 跳到步骤 3。 参见图 6所示的查询消息入队和出队调度的示意图, FPGA根据查询消息的处理 器编号将查询消息放入对应的队列,采用轮询方式对各队列的查询消息进行出队处理。 基于图 4所示的网络设备,图 7示出了根据本实施例的一种 TCAM表项查询方法 的流程图, 该方法包括以下步骤: 步骤 S702, 处理器 1、 处理器 2、 ... ...、 处理器 n根据需要发出查询消息, 查询 消息中包括处理器编号、 所查询表项的类型、 查询内容的大小、 查询内容, 查询消息 通过查询通道传送给 FPGA; 步骤 S704, FPGA识别出查询消息, 将查询消息按处理器编号入队; FPGA 的查询处理单元维护多个查询消息队列, 每个处理器对应一个队列, 根据 处理器编号将查询消息入队, 按照轮询的方式对多个查询消息队列进行调度, 每个队 列内部按 FIFO的原则调度, 进行 TCAM查询, 并将查询结果返回给请求的处理器。 步骤 S706, 查询处理单元将查询消息出队, 进入 TCAM查询, 并将查询的结果 按处理器编号返回给相应的处理器; 步骤 S708, 处理器根据 TCAM查询的结果, 即表项信息在处理器外设中的地址, 读取表项的具体内容; 步骤 S710, 处理器根据查询到表项的内容进行报文转发。 基于图 4所示的网络设备,图 8示出了根据本实施例的一种 CPU对表项刷新的方 法流程图, 该方法包括以下步骤: 步骤 S802, CPU发出表项刷新消息,刷新消息中包括表项的类型以及刷新的内容, 刷新消息通过刷新通道传送给 FPGA; 步骤 S804, FPGA识别出刷新消息, 将刷新消息入队; 步骤 S806, 按照先进先出的原则将刷新消息调度出队; 步骤 S808, TCAM收到刷新消息则将表项条目进行更新, 包括增添、 删除、 修改 操作; 步骤 S810, CPU对处理器外设表项中的条目进行更新, 包括增添、 删除、 修改操 作。 上述方法中的处理器用 FPGA中转进行 TCAM的查询访问, FPGA返回指向外设 中表项地址的指针或者索引, 处理器根据返回的结果, 读取处理器外设中的表项; 另 夕卜, CPU用 FPGA中转进行 TCAM的刷新操作(即 TCAM表项条目的更新操作), 同 时更新处理器外设中的相应表项信息。 本实施例提供的调度方法支持查询和刷新操作的并行处理, 在查询的同时可以进 行表项条目的更新, 表项条目更新的同时也可以进行查询。 同时, 上述方法采用在 FPGA上设置与处理器个数对应的查询消息队列, 能够解决相关技术只支持单个处理 器的查询, 对多个处理器或者多线程的并行查询以及多种表项的查询处理能力不足的 问题, 如果有多个单核处理器并行查询多种表项, 查找的效率将比较高; 且因采用轮 询调度的方式, 各个单核处理器的性能也比较均衡。 实施例 3 图 9示出了根据本发明实施例的一种 FPGA装置的结构框图, 该装置包括: 查询消息入队模块 92, 设置为收到查询消息后, 将查询消息放入查询消息队列; 刷新消息入队模块 94, 设置为收到刷新消息后, 将刷新消息放入刷新消息队列; 查询调度模块 96, 与查询消息入队模块 92相连, 设置为对查询消息队列中的查 询消息进行调度; 刷新调度模块 98, 与刷新消息入队模块 94相连, 设置为对刷新消息队列中的刷 新消息进行调度。 查询消息入队模块 92包括: 队列确定单元, 设置为接收到查询消息后, 根据查询 消息携带的处理器编号确定对应的查询消息队列; 其中, FPGA装置上设置有多个查 询消息队列, 且查询消息队列与处理器一一对应; 入队单元, 设置为将查询消息放入 队列确定单元确定的查询消息队列中; 查询调度模块 96包括: 轮询调度单元, 设置为采用轮询方式调度多个查询消息队 列; 出队单元, 设置为对轮询调度单元调度的查询消息队列中的查询消息进行出队列 处理。 优选地, 出队单元包括: 出队子单元, 设置为采用先进先出 FIFO 的方式对上述 轮询调度单元调度的查询消息队列中的查询消息进行出队列处理。 刷新调度模块 98包括: 刷新调度单元, 设置为采用先进先出 FIFO的方式对刷新 消息队列中的刷新消息进行调度。 其中, 上述查询消息包括: 处理器编号、 所查询表项类型、 查询内容的大小以及 查询内容。处理器编号用于确定查询消息所入的队列号, 以及查询结果返回的处理器; 表项类型标识是何种表项的查询, 是 ACL或者路由, 还是其他表项; 查询内容的大小 表示是多少位的查询, 比如 144/256; 查询内容是输入查找的条件, 比如查路由输入的 内容是目的 IP, ACL查找输入的内容是报文的 IP五元组, 该 IP五元组包括源 IP址, 目的 IP地址, 源端口号, 目的端口号, 以及协议类型。 在将查询消息放入对应的查询 消息队列时, 可以为该查询消息设置消息编号, 以标识该查询消息队列中查询消息进 入的先后顺序。 本实施例的多个处理器共用一个 FIFO缓存队列, 本实施例只设一个刷新消息队 列是由于在实际应用中 CPU对 TCAM条目的刷新操作频率较低, 一般是在用户配置 更改的情况下才刷新表项。刷新消息的数据结构包括刷新消息编号、刷新条目的类型、 刷新内容。 上述 FPGA装置可以应用于多核处理器或者多个处理器用 FPGA中转进行 TCAM 查询及刷新处理中, 由于 FPGA装置上设置两个分支, 即查询处理分支和刷新处理分 支, 对两个分支采用单独进行处理, 互不干扰, 解决了因刷新优先级高于查询优先级 引起的查询响应较慢的问题, 能够提供高速的查表转发和表项刷新, 实现快速转发, 提升了网络设备的吞吐能力, 进而提高了网络设备的性能。 实施例 4 图 10 示出了根据本发明实施例的一种网络设备的结构框图, 该网络设备包括 FPGA装置 102、处理器 104和 CPU 106, FPGA装置 102分别与处理器 104和 CPU 106 相连, 其中, FPGA装置 102可以按照实施例 3中的方式实现, 这里不再详述。 处理器 104, 设置为向 FPGA装置 102发送查询消息, 以及接收所述 FPGA装置 返回的查询结果, 根据该查询结果获取路由信息, 根据该路由信息进行报文转发; 5) TCAM unit, configured to respond to a refresh message sent by the CPU through the FPGA, update the entry of the entry; and respond to the query message sent by the processor through 
the FPGA, and return the query result. Based on the network device shown in FIG. 4, this embodiment provides a method for querying message enqueue and dequeue scheduling. The query processing unit of the FPGA in this embodiment maintains a queue state vector, and the queue state vector is a binary. The value of the system, the corresponding position 1 indicates that the queue has messages, for example: the total number of queues is 8, the queue status vector 00001001 indicates that there are messages in queue 1 and queue 4 that need to be dequeued, and there are no messages in the other six queues that need to be dequeued, see 5, the method includes the following steps: Step S502, querying a message into a team, specifically: after receiving the query message, the query processing unit separately enqueues according to the processor number of the query message, and numbers the query message according to the queue of the enqueue The corresponding position of the queue state vector is 1, indicating that the queue has a message to be dequeued; in step S504, the query processing unit cyclically schedules each queue by polling, and schedules one query message in one queue per round; Step 1: Initialize the scheduling queue number to n=l, and start scheduling from the first queue. Step 2: If n is greater than the total number of queues, set n=l, that is, after the last queue performs scheduling, and then from the first queue. Start the round-robin scheduling; otherwise, the current round of the scheduling queue is n; Step 3, determine whether the corresponding bit of the queue status vector is set to 1, if 1. If the queue has a query message that needs to be dequeued, go to step 4. Otherwise, it means that the queue does not have a query message to be scheduled, and the next queue is scheduled to be executed, that is, step 5 is performed; Step 4, the internal scheduling of the queue, according to the query The message number is dequeued in sequence. If all the query messages of the queue are scheduled, the bits in the queue vector corresponding to the queue are cleared to 0. Step 5, n=n+l, execute the scheduling of the next queue, and skip to step 3. Referring to the schematic diagram of the query message enqueue and dequeue scheduling shown in FIG. 6, the FPGA puts the query message into the corresponding queue according to the processor number of the query message, and uses the polling manner to dequeue the query messages of each queue. Based on the network device shown in FIG. 4, FIG. 7 shows a flowchart of a method for querying a TCAM entry according to this embodiment, the method comprising the following steps: Step S702, processor 1, processor 2, ... The processor n issues a query message according to the need, the query message includes the processor number, the type of the queryed item, the size of the query content, and the query content, and the query message is transmitted to the FPGA through the query channel; Step S704, the FPGA identifies The query message is sent, and the query message is entered into the queue by the processor number; The query processing unit of the FPGA maintains multiple query message queues, each processor corresponds to one queue, and the query message is queued according to the processor number, and multiple query message queues are scheduled according to the polling manner, and each queue is internally FIFO-based. The principle is scheduled, the TCAM query is performed, and the query result is returned to the requesting processor. 
Based on the network device shown in FIG. 4, FIG. 7 shows a flowchart of a TCAM entry query method according to this embodiment, which includes the following steps:

Step S702, processor 1, processor 2, ..., processor n issue query messages as needed; a query message includes the processor number, the type of the queried entry, the size of the query content and the query content itself, and is transmitted to the FPGA through the query channel;

Step S704, the FPGA identifies the query message and enqueues it according to its processor number; the query processing unit of the FPGA maintains multiple query message queues, one per processor, enqueues the query messages by processor number, schedules the multiple query message queues in a round-robin manner with FIFO order inside each queue, performs the TCAM lookup, and returns the lookup result to the requesting processor;

Step S706, the query processing unit dequeues the query message, performs the TCAM lookup, and returns the result of the lookup to the corresponding processor according to the processor number;

Step S708, the processor uses the result of the TCAM lookup, i.e. the address of the entry information in the processor peripheral, to read the specific content of the entry;

Step S710, the processor forwards the packet according to the content of the retrieved entry.

Based on the network device shown in FIG. 4, FIG. 8 shows a flowchart of a method by which the CPU refreshes table entries according to this embodiment, which includes the following steps:

Step S802, the CPU issues an entry refresh message, which includes the entry type and the refresh content, and the refresh message is transmitted to the FPGA through the refresh channel;

Step S804, the FPGA identifies the refresh message and enqueues it;

Step S806, the refresh messages are dequeued on a first-in-first-out basis;

Step S808, upon receiving a refresh message, the TCAM updates its table entries, including add, delete and modify operations;

Step S810, the CPU updates the corresponding entries in the processor peripheral, likewise including add, delete and modify operations.

In the above methods, the processors access the TCAM for lookups via the FPGA, which returns a pointer or index to the address of the entry in the peripheral, and each processor reads the entry from the processor peripheral according to the returned result. In addition, the CPU performs the TCAM refresh operation (i.e. the update of TCAM table entries) via the FPGA and at the same time updates the corresponding entry information in the processor peripheral.

The scheduling method provided in this embodiment supports parallel processing of query and refresh operations: table entries can be updated while queries are in progress, and queries can be served while table entries are being updated. Furthermore, by providing query message queues on the FPGA corresponding to the number of processors, the method overcomes the limitation of the related art, which supports queries from only a single processor and lacks the capacity for parallel queries from multiple processors or threads and for queries of multiple entry types. When multiple single-core processors query multiple entry types in parallel, lookup efficiency is relatively high, and because round-robin scheduling is used, the performance of the individual single-core processors is also well balanced.
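As a complement to the query path, the refresh path of FIG. 8 is a plain first-in-first-out queue whose messages are applied to the table as add, delete or modify operations. The C sketch below illustrates only that FIFO behaviour under assumed types (refresh_msg_t and a toy index-addressed table standing in for the TCAM entries); it is a sketch of the idea, not the actual refresh channel of steps S802 to S810.

```c
/*
 * Sketch of the refresh path: a single FIFO of refresh messages applied as
 * add/delete/modify operations. The types and the fixed table are assumptions
 * made for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>

#define REFRESH_FIFO_DEPTH 32
#define TABLE_SIZE         256

typedef enum { OP_ADD, OP_DELETE, OP_MODIFY } refresh_op_t;

typedef struct {
    uint32_t     msg_no;   /* refresh message number                     */
    refresh_op_t op;       /* add / delete / modify (step S808)          */
    uint16_t     index;    /* which table entry the refresh touches      */
    uint32_t     content;  /* new entry content for add/modify           */
} refresh_msg_t;

static refresh_msg_t fifo[REFRESH_FIFO_DEPTH];
static unsigned fifo_head, fifo_tail;

static uint32_t entry[TABLE_SIZE];   /* stand-in for the TCAM entries    */
static bool     valid[TABLE_SIZE];

/* Step S804: the single shared refresh queue appends messages in arrival order. */
static bool refresh_enqueue(const refresh_msg_t *m)
{
    if (fifo_tail - fifo_head == REFRESH_FIFO_DEPTH)
        return false;                               /* queue full */
    fifo[fifo_tail++ % REFRESH_FIFO_DEPTH] = *m;
    return true;
}

/* Steps S806-S808: dequeue strictly first-in-first-out and apply the update. */
static bool refresh_dequeue_and_apply(void)
{
    if (fifo_head == fifo_tail)
        return false;                               /* nothing pending */
    refresh_msg_t m = fifo[fifo_head++ % REFRESH_FIFO_DEPTH];
    switch (m.op) {
    case OP_ADD:
    case OP_MODIFY:
        entry[m.index] = m.content;                 /* add or overwrite the entry */
        valid[m.index] = true;
        break;
    case OP_DELETE:
        valid[m.index] = false;                     /* remove the entry           */
        break;
    }
    return true;  /* step S810 would mirror this update in the processor peripheral */
}
```

Because the refresh FIFO is entirely separate from the per-processor query queues, draining it never delays query dequeue, which is the point of the two independent channels.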
Embodiment 3

FIG. 9 shows a structural block diagram of an FPGA device according to an embodiment of the present invention. The device includes: a query message enqueue module 92, configured to place a received query message into a query message queue; a refresh message enqueue module 94, configured to place a received refresh message into a refresh message queue; a query scheduling module 96, connected to the query message enqueue module 92 and configured to schedule the query messages in the query message queue; and a refresh scheduling module 98, connected to the refresh message enqueue module 94 and configured to schedule the refresh messages in the refresh message queue.

The query message enqueue module 92 includes: a queue determining unit, configured to, after a query message is received, determine the corresponding query message queue according to the processor number carried in the query message, wherein the FPGA device is provided with multiple query message queues and the query message queues are in one-to-one correspondence with the processors; and an enqueue unit, configured to place the query message into the query message queue determined by the queue determining unit. The query scheduling module 96 includes: a polling scheduling unit, configured to schedule the multiple query message queues in a round-robin manner; and a dequeue unit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit. Preferably, the dequeue unit includes a dequeue subunit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit in first-in-first-out (FIFO) order. The refresh scheduling module 98 includes a refresh scheduling unit, configured to schedule the refresh messages in the refresh message queue in first-in-first-out (FIFO) order.

The query message includes: the processor number, the type of the queried entry, the size of the query content and the query content. The processor number determines the queue into which the query message is placed and the processor to which the query result is returned. The entry type identifies which kind of entry is being queried, for example an ACL, a route or another entry type. The size of the query content indicates how many bits the query uses, for example 144 or 256. The query content is the input search condition: for a route lookup the input is the destination IP address, and for an ACL lookup the input is the IP five-tuple of the packet, which includes the source IP address, the destination IP address, the source port number, the destination port number and the protocol type. When a query message is placed into the corresponding query message queue, a message number may be assigned to it to record the order in which query messages enter that queue.

The multiple processors of this embodiment share one FIFO buffer queue for refreshes; only one refresh message queue is provided because, in practice, the CPU refreshes TCAM entries at a low frequency, generally only when the user configuration changes. The data structure of a refresh message includes the refresh message number, the type of the refreshed entry and the refresh content.

The above FPGA device can be applied where a multi-core processor or multiple processors perform TCAM query and refresh processing via an FPGA. Since two branches are provided on the FPGA device, namely a query processing branch and a refresh processing branch, and the two branches are processed separately without interfering with each other, the device solves the problem of slow query responses caused by giving refreshes a higher priority than queries, provides high-speed table lookup forwarding and entry refreshing, achieves fast forwarding, and improves the throughput and hence the performance of the network device.
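The fields carried by the query and refresh messages described above can be pictured with the following C layouts. Field names, widths and the fixed-size refresh content are assumptions made for this sketch rather than the actual message format exchanged between the processors, the CPU and the FPGA; the layouts only restate the information listed in the text (processor number, entry type, 144- or 256-bit key size, destination IP or IP five-tuple, and the refresh message number, entry type and content).

```c
/* Illustrative layouts for the query and refresh messages described above. */
#include <stdint.h>

typedef enum { ENTRY_ACL, ENTRY_ROUTE, ENTRY_OTHER } entry_type_t;

/* ACL lookup key: the IP five-tuple of the packet. */
typedef struct {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
} ip_five_tuple_t;

/* Query message: processor number, entry type, key size and query content. */
typedef struct {
    uint8_t      proc_id;    /* selects the queue and the return path          */
    uint32_t     msg_no;     /* assigned on enqueue to record arrival order    */
    entry_type_t type;       /* ACL, route, or another entry type              */
    uint16_t     key_bits;   /* size of the query content, e.g. 144 or 256     */
    union {
        uint32_t        route_dst_ip; /* route lookup: destination IP          */
        ip_five_tuple_t acl;          /* ACL lookup: the IP five-tuple         */
    } key;
} query_msg_t;

/* Refresh message: message number, refreshed entry type and refresh content. */
typedef struct {
    uint32_t     msg_no;
    entry_type_t type;
    uint8_t      content[32]; /* refresh content; width chosen arbitrarily     */
} refresh_msg_t;
```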
Embodiment 4

FIG. 10 shows a structural block diagram of a network device according to an embodiment of the present invention. The network device includes an FPGA device 102, a processor 104 and a CPU 106, and the FPGA device 102 is connected to the processor 104 and the CPU 106, respectively. The FPGA device 102 can be implemented in the manner of Embodiment 3 and is not described in detail again here.

The processor 104 is configured to send query messages to the FPGA device 102, receive the query results returned by the FPGA device, obtain routing information according to the query results, and forward packets according to the routing information.

The CPU 106 is configured to send refresh messages to the FPGA device 102, where a refresh message carries indication information for performing a refresh operation on the TCAM.

The network device of this embodiment can also be implemented as the network device shown in FIG. 4 of Embodiment 2, with the same specific functions, which are not repeated here. By providing two branches on the FPGA device, namely a query processing branch and a refresh processing branch, and processing the two branches separately without mutual interference, the network device of this embodiment solves the problem of slow query responses caused by giving refreshes a higher priority than queries, provides high-speed table lookup forwarding and entry refreshing, achieves fast forwarding, and improves the throughput and hence the performance of the network device.
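From the processor 104 side, the interaction reduces to sending a query, receiving back an index or pointer into the table memory of the processor peripheral, reading the entry, and forwarding. The C sketch below is a hypothetical software stand-in for that sequence; fpga_send_query(), fpga_wait_result() and the peripheral_table array are stubs invented for this example and are not interfaces defined by this document.

```c
/*
 * Hypothetical processor-side view of the query path: send a query to the
 * FPGA device, receive back an index into the peripheral table memory,
 * read the entry and forward.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t next_hop; uint32_t out_port; } route_entry_t;

/* Stand-in for the routing table held in the processor peripheral (e.g. SSRAM). */
static route_entry_t peripheral_table[1024];

static bool fpga_send_query(uint8_t proc_id, uint32_t dst_ip)
{
    (void)proc_id; (void)dst_ip;       /* stub: would write to the query channel */
    return true;
}

static bool fpga_wait_result(uint8_t proc_id, uint32_t *entry_index)
{
    (void)proc_id;
    *entry_index = 0;                  /* stub: would read the returned index    */
    return true;
}

/* Query, read the entry addressed by the returned index, then forward. */
static bool forward_packet(uint8_t proc_id, uint32_t dst_ip)
{
    uint32_t idx;
    if (!fpga_send_query(proc_id, dst_ip))   /* query message to the FPGA device */
        return false;
    if (!fpga_wait_result(proc_id, &idx))    /* FPGA returns an index or pointer */
        return false;
    route_entry_t e = peripheral_table[idx]; /* read the entry content           */
    (void)e;                                 /* emit the packet via e.next_hop   */
    return true;
}
```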
Compared with the prior art, the technique provided by the above embodiments handles queries and refreshes separately, so that query processing and refresh processing do not interfere with each other, which improves the efficiency of both. Query messages from different processors are enqueued separately, which enables parallel queries, and the query queues of the individual single-core processors are scheduled in a round-robin manner, so that the performance of each single-core processor is balanced. The technique can respond quickly to TCAM queries from the processors and to entry refreshes, achieving fast forwarding, increasing the throughput of the network device and thereby improving its performance.

Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described can be performed in an order different from the one given here; alternatively, they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.

The above is only the preferred embodiment of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and changes.
Any modifications, equivalent substitutions, improvements, etc. made within the spirit and scope of the present invention are intended to be included within the scope of the present invention.

Claims

1. A method for scheduling ternary content addressable memory (TCAM) query and refresh messages, comprising:
after receiving a query message, a field programmable gate array (FPGA) placing the query message into a query message queue;
after receiving a refresh message, the FPGA placing the refresh message into a refresh message queue; and
the FPGA separately scheduling the query messages in the query message queue and the refresh messages in the refresh message queue.
2. The method according to claim 1, wherein the FPGA is provided with a plurality of query message queues, and the query message queues are in one-to-one correspondence with processors;
wherein the FPGA placing the query message into the query message queue comprises: the FPGA placing the query message into the query message queue corresponding to the processor number carried in the query message; and
wherein the FPGA scheduling the query messages in the query message queue comprises: the FPGA scheduling the plurality of query message queues in a polling manner and dequeuing the query messages in the scheduled query message queue.
3. The method according to claim 2, wherein the FPGA dequeuing the query messages in the scheduled query message queue comprises:
the FPGA dequeuing the query messages in the scheduled query message queue in a first-in-first-out (FIFO) manner.
4. The method according to claim 1, wherein the FPGA scheduling the refresh messages in the refresh message queue comprises:
the FPGA scheduling the refresh messages in the refresh message queue in a first-in-first-out (FIFO) manner.
5. The method according to any one of claims 1 to 4, wherein, after the FPGA schedules the query messages in the query message queue, the method further comprises:
the FPGA receiving a query result of the query message and returning the query result to the processor corresponding to the query message; and
the processor obtaining routing information according to the query result and forwarding packets according to the routing information.

6. A field programmable gate array (FPGA) device, comprising:
a query message enqueue module, configured to place a received query message into a query message queue;
a refresh message enqueue module, configured to place a received refresh message into a refresh message queue;
a query scheduling module, configured to schedule the query messages in the query message queue; and
a refresh scheduling module, configured to schedule the refresh messages in the refresh message queue.

7. The device according to claim 6, wherein
the query message enqueue module comprises: a queue determining unit, configured to, after the query message is received, determine the corresponding query message queue according to the processor number carried in the query message, wherein the FPGA device is provided with a plurality of query message queues and the query message queues are in one-to-one correspondence with processors; and an enqueue unit, configured to place the query message into the query message queue determined by the queue determining unit; and
the query scheduling module comprises: a polling scheduling unit, configured to schedule the plurality of query message queues in a polling manner; and a dequeue unit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit.

8. The device according to claim 7, wherein the dequeue unit comprises:
a dequeue subunit, configured to dequeue the query messages in the query message queue scheduled by the polling scheduling unit in a first-in-first-out (FIFO) manner.

9. The device according to claim 6, wherein the refresh scheduling module comprises:
a refresh scheduling unit, configured to schedule the refresh messages in the refresh message queue in a first-in-first-out (FIFO) manner.

10. A network device, comprising the field programmable gate array (FPGA) device according to any one of claims 6 to 9, the network device further comprising:
a processor, configured to send a query message to the FPGA device, receive a query result returned by the FPGA device, obtain routing information according to the query result, and forward packets according to the routing information; and
a CPU, configured to send a refresh message to the FPGA device, wherein the refresh message carries indication information for performing a refresh operation on the ternary content addressable memory (TCAM).
PCT/CN2011/080616 2010-10-29 2011-10-10 Method and device for dispatching tcam (telecommunication access method) query and refreshing messages WO2012055319A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201010526538.8A CN101986271B (en) 2010-10-29 2010-10-29 Method and device for dispatching TCAM (telecommunication access method) query and refresh messages
CN201010526538.8 2010-10-29

Publications (1)

Publication Number Publication Date
WO2012055319A1 2012-05-03

Family

ID=43710620

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/080616 WO2012055319A1 (en) 2010-10-29 2011-10-10 Method and device for dispatching tcam (telecommunication access method) query and refreshing messages

Country Status (2)

Country Link
CN (1) CN101986271B (en)
WO (1) WO2012055319A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015149015A1 (en) * 2014-03-28 2015-10-01 Caradigm Usa Llc Methods, apparatuses and computer program products for providing a speed table for analytical models

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101986271B (en) * 2010-10-29 2014-11-05 中兴通讯股份有限公司 Method and device for dispatching TCAM (telecommunication access method) query and refresh messages
CN102662888A (en) * 2012-03-20 2012-09-12 大连梯耐德网络技术有限公司 System for controlling multi-user parallel operation of TCAM, and control method thereof
CN102880680B (en) * 2012-09-11 2015-08-12 大连梯耐德网络技术有限公司 A kind of multi-user's statistical method based on random access storage device
CN103023782B (en) * 2012-11-22 2016-05-04 北京星网锐捷网络技术有限公司 A kind of method and device of accessing three-state content addressing memory
CN104239337B (en) * 2013-06-19 2019-03-26 中兴通讯股份有限公司 Processing method and processing device of tabling look-up based on TCAM
CN105791125B (en) * 2014-12-26 2020-03-17 中兴通讯股份有限公司 Method and device for writing data in ternary content addressable memory
CN105791163B (en) * 2014-12-26 2019-09-24 南京中兴软件有限责任公司 Update processing method and processing device
CN106302174A (en) * 2015-06-12 2017-01-04 中兴通讯股份有限公司 A kind of method and device realizing route querying
CN107301353B (en) * 2017-06-27 2020-06-09 徐萍 Streaming intensive data desensitization method and data desensitization equipment thereof
CN114356418B (en) * 2022-03-10 2022-08-05 之江实验室 Intelligent table entry controller and control method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1631008A (en) * 2001-07-13 2005-06-22 艾利森公司 Method and apparatus for scheduling message processing
CN1798088A (en) * 2004-12-30 2006-07-05 中兴通讯股份有限公司 Dispatching method and equipment for searching and updating routes based on FPGA
CN101866357A (en) * 2010-06-11 2010-10-20 福建星网锐捷网络有限公司 Method and device for updating items of three-state content addressing memory
CN101986271A (en) * 2010-10-29 2011-03-16 中兴通讯股份有限公司 Method and device for dispatching TCAM (telecommunication access method) query and refresh messages

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1327674C (en) * 2005-02-25 2007-07-18 清华大学 Double stack compatible router searching device supporting access control listing function on core routers
CN101840374B (en) * 2010-04-28 2012-06-27 福建星网锐捷网络有限公司 Processing device, information searching system and information searching method

Also Published As

Publication number Publication date
CN101986271B (en) 2014-11-05
CN101986271A (en) 2011-03-16

Similar Documents

Publication Publication Date Title
WO2012055319A1 (en) Method and device for dispatching tcam (telecommunication access method) query and refreshing messages
US11882025B2 (en) System and method for facilitating efficient message matching in a network interface controller (NIC)
US10552205B2 (en) Work conserving, load balancing, and scheduling
US7870306B2 (en) Shared memory message switch and cache
US7443836B2 (en) Processing a data packet
US7149226B2 (en) Processing data packets
KR102082020B1 (en) Method and apparatus for using multiple linked memory lists
US8656071B1 (en) System and method for routing a data message through a message network
US7158964B2 (en) Queue management
US7546399B2 (en) Store and forward device utilizing cache to store status information for active queues
US8972630B1 (en) Transactional memory that supports a put with low priority ring command
US7096277B2 (en) Distributed lookup based on packet contents
CN108476208A (en) Multi-path transmission designs
US20150341473A1 (en) Packet flow classification
US10397144B2 (en) Receive buffer architecture method and apparatus
US20150089095A1 (en) Transactional memory that supports put and get ring commands
US7433364B2 (en) Method for optimizing queuing performance
US7336606B2 (en) Circular link list scheduling
US9342313B2 (en) Transactional memory that supports a get from one of a set of rings command
US9996468B1 (en) Scalable dynamic memory management in a network device
US20220217085A1 (en) Server fabric adapter for i/o scaling of heterogeneous and accelerated compute systems
US20060140203A1 (en) System and method for packet queuing
US7603539B2 (en) Systems and methods for multi-frame control blocks
JP2016139978A (en) Packet processing system, communication system, packet processor, packet processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11835592

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11835592

Country of ref document: EP

Kind code of ref document: A1