WO2014094569A1 - RAM, network processing system and RAM table lookup method - Google Patents

RAM, network processing system and RAM table lookup method

Info

Publication number
WO2014094569A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual memory
ram
service table
service
access message
Prior art date
Application number
PCT/CN2013/089238
Other languages
English (en)
French (fr)
Inventor
姜海明
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to US14/653,506 (US20150350076A1)
Priority to EP13864996.7A (EP2937793B1)
Priority to RU2015127508A (RU2642358C2)
Publication of WO2014094569A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/74 - Address processing for routing
    • H04L 45/745 - Address table lookup; Address filtering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a RAM, a network processing system and a RAM table lookup method.
  • BACKGROUND OF THE INVENTION Networks are now developing at a remarkable pace; the growth of network traffic and the emergence of new services require network equipment to provide wire-speed, flexible processing capability. With their high-speed processing and flexible programmability, network processors have become an effective solution for data processing in today's networks. However, network processor forwarding rates are also growing extremely fast: mainstream network processors have reached 100 Gbps (a packet rate of 150 Mpps), and the growth of RAM (random access memory) interface bandwidth clearly falls far behind the growth of network processor forwarding rates, so how to increase the RAM lookup rate is a problem that urgently needs to be solved.
  • Embodiments of the present invention provide a RAM, a network processing system and a RAM table lookup method, so as to solve the problem of low RAM table lookup efficiency.
  • A RAM table lookup method includes the following steps: the network processor receives service table access packets from each physical interface; the network processor parses a service table access packet to obtain the service table identification information of the service table accessed by the packet; a virtual memory bank address is allocated to the service table access packet according to the service table identification information, the virtual memory bank address being the address of one of at least two virtual memory banks into which the RAM is divided that contains the service table to be looked up, wherein in the at least two virtual memory banks of the RAM the same service table is stored in at least two of the virtual memory banks; and the corresponding virtual memory bank is accessed according to the virtual memory bank address to look up the corresponding service table.
  • Preferably, the specific process of allocating the virtual memory bank address information is: the network processor queries, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, selects from them the virtual memory bank with the lowest current traffic, and generates the virtual memory bank address information.
  • Preferably, the specific process of allocating the virtual memory bank address information is: the network processor determines, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet (both allocation strategies are illustrated in the sketch following this list).
  • Preferably, after the virtual memory bank address corresponding to the service table access packet is obtained, the method further includes: constructing a table lookup key according to the obtained virtual memory bank address, and looking up the corresponding service table in the corresponding virtual memory bank according to the key.
  • Preferably, the RAM comprises one of SRAM, TCAM and SDRAM.
  • An embodiment of the present invention further provides a RAM, which includes at least two virtual memory banks, wherein the same service table is stored in at least two of the virtual memory banks. Preferably, among the at least two virtual memory banks of the RAM, all of the virtual memory banks store the same service table.
  • An embodiment of the present invention further provides a network processing system, including a network processor and a RAM;
  • the network processor includes a receiving module, a parsing module, an allocating module, a lookup module and a processing module; the receiving module is configured to receive service table access packets from each physical interface;
  • the parsing module is configured to parse the service table access packet received by the receiving module and obtain the service table identification information of the service table accessed by the packet;
  • the allocating module is configured to allocate a virtual memory bank address to the service table access packet according to the service table identification information, the virtual memory bank corresponding to that address containing the service table to be looked up; the lookup module is configured to look up, according to the obtained virtual memory bank address, the service table in the corresponding virtual memory bank of the RAM and to forward the returned lookup result to the processing module;
  • the processing module is configured to perform corresponding service processing according to the returned lookup result;
  • the RAM includes at least two virtual memory banks, and in the at least two virtual memory banks of the RAM the same service table is stored in at least two of the virtual memory banks.
  • Preferably, the allocating module further includes a selecting unit, and the selecting unit is configured to query, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, select from them the virtual memory bank with the lowest current traffic, and generate the corresponding memory bank address information.
  • Preferably, the allocating module further includes a hash calculating unit; the hash calculating unit is configured to, when the virtual memory banks in the RAM all store the same service table, determine, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.
  • Preferably, the system further includes a constructing module; the constructing module is configured to construct a table lookup key according to the obtained virtual memory bank address after the allocating module obtains the virtual memory bank address corresponding to the service table access packet; the RAM looks up the corresponding service table in the corresponding virtual memory bank according to the key.
  • Preferably, the RAM comprises one of SRAM, TCAM and SDRAM.
  • The beneficial effects of the embodiments of the present invention are: a RAM, a network processing system and a RAM table lookup method are provided, in which the RAM is divided into at least two virtual memory banks, the same service table is stored in at least two of the virtual memory banks, and a suitable virtual memory bank to access is determined by computation in the network processor; while the RAM table lookup rate is increased, the access traffic pressure on the RAM is effectively reduced and network forwarding performance is improved.
  • FIG. 1 is a schematic structural diagram of an SDRAM according to an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of a network processor according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a network processor according to another embodiment of the present invention; and
  • FIG. 4 is a flowchart of a RAM table lookup method according to an embodiment of the present invention.
  • The overall concept of the present invention is: the RAM is divided into a plurality of virtual memory areas, and the same service table is stored in at least two of them; when a service table access packet needs to access the corresponding service table, the processor performs a corresponding calculation according to the service table identification information of the service table access packet, so as to obtain a suitable virtual memory bank address for the network processor to access.
  • The RAM mentioned in the present invention may be of various memory types, such as SRAM (Static Random Access Memory), TCAM (Ternary Content Addressable Memory) and SDRAM (Synchronous Dynamic Random Access Memory), all of which can improve table lookup efficiency through the table lookup method of the present invention.
  • Although the RAM may be of various memory types, the table lookup rate of SDRAM is relatively slow due to its own structural limitations, and the table lookup method of the present application achieves better results on SDRAM; therefore, in this embodiment the technical solution of the present application is described taking an SDRAM as the RAM as an example. Referring to FIG. 1, in this embodiment the SDRAM can be divided into at least two virtual memory banks.
  • In order to maintain the table lookup rate of the entire SDRAM, the capacity of each virtual memory bank is preferably allocated in an equally divided manner.
  • In this embodiment the SDRAM is divided into N virtual memory banks, and in order to increase the table lookup rate one service table may be stored in at least two virtual memory banks; a preferred storage mode is to store the same service table in all of the divided virtual memory banks, so as to maximize table lookup efficiency. The number of virtual memory banks may be arbitrary; a preferred number of banks N is calculated as N = F2/F1, where F1 is the table lookup frequency of a single virtual memory bank and F2 is the table lookup frequency actually required by the service table.
  • Referring to FIG. 2 and FIG. 3, this embodiment also provides a network processing system, which includes a network processor and a RAM.
  • The network processor includes a receiving module, a parsing module, an allocating module, a lookup module and a processing module.
  • The receiving module is configured to receive service table access packets from each physical interface;
  • the parsing module is configured to parse the service table access packet received by the receiving module and obtain the service table identification information of the service table accessed by the packet;
  • the allocating module is configured to allocate a virtual memory bank address to the service table access packet according to the service table identification information;
  • the lookup module is configured to look up the service table in the corresponding virtual memory bank of the RAM according to the obtained virtual memory bank address, and to forward the returned lookup result to the processing module;
  • the processing module is configured to perform corresponding service processing according to the returned lookup result.
  • In this embodiment, the allocating module further includes a selecting unit; the selecting unit is mainly configured to query, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, select from them the virtual memory bank with the lowest current traffic, and generate the corresponding memory bank address information.
  • This embodiment further provides another network processor, which also includes a receiving module, a parsing module, an allocating module, a lookup module and a processing module. The function of each module is the same as in the foregoing embodiment; the difference is that the allocating module in this embodiment further includes a hash calculating unit.
  • The hash calculating unit is configured to, when the virtual memory banks in the SDRAM all store the same service table, determine, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.
  • Referring to FIG. 4, the RAM table lookup method of the present application is described in detail below with reference to the RAM structure and the functions of the modules of the network processor.
  • The RAM table lookup method in this embodiment includes the following steps.
  • Step 400: The network processor receives service table access packets from each physical interface; proceed to step 402. In this step, the receiving module of the network processor is mainly responsible for receiving the service table access packets from each physical interface.
  • Step 402: The network processor parses the service table access packet and obtains the service table identification information of the service table accessed by the packet; proceed to step 404. In this step, the parsing module is responsible for parsing the received service table access packet and obtaining the service table identification information therein; the obtained identification information is mainly information such as the MAC address information or IP address information of the service table access packet.
  • Step 404: A virtual memory bank address is allocated to the service table access packet according to the service table identification information; proceed to step 406. This step mainly covers the following cases: when not all of the virtual memory banks of the RAM store the same service table, the selecting unit in the allocating module queries, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, selects from them the virtual memory bank with the lowest current traffic, and generates the memory bank address information; when the virtual memory banks in the RAM all contain the same service table, the hash calculating unit in the allocating module determines, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.
  • Step 406: A table lookup key is constructed according to the obtained virtual memory bank address; proceed to step 408. In this step, the constructing module is responsible for constructing the corresponding table lookup key according to the virtual memory bank address calculated by the allocating module, that is, the virtual memory bank address is encoded into the corresponding table lookup key.
  • Step 408: The corresponding service table is looked up in the corresponding virtual memory bank according to the table lookup key; proceed to step 410. In this step, the lookup module is responsible for looking up the service table in the corresponding virtual memory bank of the RAM according to the corresponding table lookup key, and for forwarding the returned lookup result to the processing module.
  • Step 410: Corresponding service processing is performed according to the returned lookup result. In this step, the processing module performs the corresponding service processing according to the returned lookup result (a code sketch of this flow, under stated assumptions, is given after this list).
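
The patent describes the lookup flow only in functional terms (modules, units and steps 400-410) and does not give data structures, a key format or a concrete hash function. The following C sketch is therefore only an illustration of how the pieces could fit together in software; the names (vbank, service_entry, alloc_bank_by_hash, build_lookup_key), the FNV-1a hash, the direct-mapped bank layout and the key packing are all assumptions made for clarity, not the claimed implementation, and in a real network processor these operations would be performed by the parsing, allocating, constructing and lookup modules in hardware or microcode.

```c
/*
 * Illustrative sketch only. The patent specifies the lookup flow functionally;
 * the data layout, the FNV-1a hash, the direct-mapped entries and the key
 * packing below are assumptions chosen for clarity, not the claimed design.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_VBANKS       4     /* N virtual memory banks carved out of one RAM */
#define MAX_TABLES       8     /* service tables known to the system (assumed) */
#define ENTRIES_PER_BANK 1024

struct service_entry {
    uint64_t key;              /* table lookup key */
    uint32_t action;           /* forwarding action / service data */
    int      valid;
};

struct vbank {
    int      holds_table[MAX_TABLES]; /* nonzero if service table t is replicated here */
    uint64_t current_traffic;         /* recent access load on this bank */
    struct service_entry entries[ENTRIES_PER_BANK];
};

static struct vbank ram[NUM_VBANKS];  /* the RAM divided into N virtual memory banks */

/* FNV-1a hash of the service table identification information
 * (e.g. the MAC or IP address carried by the access packet). */
static uint32_t hash_id(const uint8_t *id, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= id[i];
        h *= 16777619u;
    }
    return h;
}

/* Step 404, hash calculating unit: every bank stores the same service table,
 * so the bank address is derived directly from a hash of the identification info. */
static int alloc_bank_by_hash(const uint8_t *id, size_t len)
{
    return (int)(hash_id(id, len) % NUM_VBANKS);
}

/* Step 404, selecting unit: only some banks hold the requested table; pick the
 * bank with the lowest current traffic among those that contain it. */
static int alloc_bank_by_lowest_traffic(int table_id)
{
    int best = -1;
    for (int b = 0; b < NUM_VBANKS; b++) {
        if (!ram[b].holds_table[table_id])
            continue;
        if (best < 0 || ram[b].current_traffic < ram[best].current_traffic)
            best = b;
    }
    return best;                      /* -1 if no bank holds the table */
}

/* Step 406, constructing module: fold the chosen bank address into the table
 * lookup key. The layout (bank number in the upper bits) is purely an assumption. */
static uint64_t build_lookup_key(int bank, const uint8_t *id, size_t len)
{
    return ((uint64_t)bank << 32) | hash_id(id, len);
}

/* Steps 408-410, lookup module: look up the service entry in the chosen bank. */
static const struct service_entry *lookup(int bank, uint64_t key)
{
    struct vbank *vb = &ram[bank];
    vb->current_traffic++;            /* account for the access load of this lookup */
    struct service_entry *e = &vb->entries[key % ENTRIES_PER_BANK];
    return (e->valid && e->key == key) ? e : NULL;
}

int main(void)
{
    /* Preferred storage mode: replicate service table 0 in every bank. */
    for (int b = 0; b < NUM_VBANKS; b++)
        ram[b].holds_table[0] = 1;

    /* Identification information parsed from a service table access packet. */
    uint8_t mac[6] = {0x00, 0x1b, 0x44, 0x11, 0x3a, 0xb7};

    int bank = alloc_bank_by_hash(mac, sizeof mac);
    uint64_t key = build_lookup_key(bank, mac, sizeof mac);
    const struct service_entry *hit = lookup(bank, key);
    printf("bank %d, key 0x%llx, %s\n",
           bank, (unsigned long long)key, hit ? "hit" : "miss");

    /* Selecting-unit alternative, used when tables are not fully replicated. */
    printf("least-loaded bank holding table 0: %d\n", alloc_bank_by_lowest_traffic(0));
    return 0;
}
```

The two allocation functions mirror the selecting unit (lowest current traffic among the banks that hold the table) and the hash calculating unit (used when every bank stores the same table); whichever one is used, the bank address it returns feeds the key construction and lookup of steps 406-410.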

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides a RAM, a network processing system and a RAM table lookup method. The RAM is divided into at least two virtual memory banks, the same service table is stored in at least two of the virtual memory banks, and a suitable virtual memory bank to access is determined by computation in the network processor; while the RAM table lookup rate is increased, the access traffic pressure on the RAM is effectively reduced and network forwarding performance is improved. Furthermore, the same service table can be stored in all of the virtual memory banks of the RAM, maximizing the RAM table lookup rate, and the network processor can use a hash algorithm to calculate the virtual memory bank address information, which not only simplifies the calculation but also finds a suitable virtual memory bank more effectively, further improving table lookup efficiency and network forwarding performance.

Description

RAM, Network Processing System and RAM Table Lookup Method

Technical Field

The present invention relates to the field of communications technologies, and in particular to a RAM, a network processing system and a RAM table lookup method.

Background Art

Networks are now developing at a remarkable pace; the growth of network traffic and the emergence of new services require network equipment to provide wire-speed, flexible processing capability. With their high-speed processing and flexible programmability, network processors have become an effective solution for data processing in today's networks. However, network processor forwarding rates are also growing extremely fast: mainstream network processors have reached 100 Gbps (a packet rate of 150 Mpps), and the growth of RAM (random access memory) interface bandwidth clearly falls far behind the growth of network processor forwarding rates, so how to increase the RAM lookup rate is a problem that urgently needs to be solved.

Summary of the Invention

Embodiments of the present invention provide a RAM, a network processing system and a RAM table lookup method, so as to solve the problem of low RAM table lookup efficiency. The technical solutions adopted by the embodiments of the present invention are as follows.

A RAM table lookup method includes the following steps: a network processor receives service table access packets from each physical interface; the network processor parses a service table access packet to obtain the service table identification information of the service table accessed by the packet; a virtual memory bank address is allocated to the service table access packet according to the service table identification information, the virtual memory bank address being the address of one of at least two virtual memory banks into which the RAM is divided that contains the service table to be looked up, wherein in the at least two virtual memory banks of the RAM the same service table is stored in at least two of the virtual memory banks; and the corresponding virtual memory bank is accessed according to the virtual memory bank address to look up the corresponding service table.

Preferably, the specific process of allocating the virtual memory bank address information is: the network processor queries, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, selects from them the virtual memory bank with the lowest current traffic, and generates the virtual memory bank address information.

Preferably, in the at least two virtual memory banks of the RAM, all of the virtual memory banks store the same service table.

Preferably, the specific process of allocating the virtual memory bank address information is: the network processor determines, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.

Preferably, after the virtual memory bank address corresponding to the service table access packet is obtained, the method further includes: constructing a table lookup key according to the obtained virtual memory bank address, and looking up the corresponding service table in the corresponding virtual memory bank according to the key.

Preferably, the RAM comprises one of SRAM, TCAM and SDRAM.

An embodiment of the present invention further provides a RAM, which includes at least two virtual memory banks, wherein the same service table is stored in at least two of the virtual memory banks. Preferably, among the at least two virtual memory banks of the RAM, all of the virtual memory banks store the same service table.

An embodiment of the present invention further provides a network processing system, which includes a network processor and a RAM. The network processor includes a receiving module, a parsing module, an allocating module, a lookup module and a processing module. The receiving module is configured to receive service table access packets from each physical interface; the parsing module is configured to parse the service table access packet received by the receiving module and obtain the service table identification information of the service table accessed by the packet; the allocating module is configured to allocate a virtual memory bank address to the service table access packet according to the service table identification information, the virtual memory bank corresponding to that address containing the service table to be looked up; the lookup module is configured to look up the service table in the corresponding virtual memory bank of the RAM according to the obtained virtual memory bank address and forward the returned lookup result to the processing module; the processing module is configured to perform corresponding service processing according to the returned lookup result; and the RAM includes at least two virtual memory banks, and in the at least two virtual memory banks of the RAM the same service table is stored in at least two of the virtual memory banks.

Preferably, the allocating module further includes a selecting unit, which is configured to query, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, select from them the virtual memory bank with the lowest current traffic, and generate the corresponding memory bank address information.

Preferably, the allocating module further includes a hash calculating unit, which is configured to, when the virtual memory banks in the RAM all store the same service table, determine, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.

Preferably, the system further includes a constructing module, which is configured to construct a table lookup key according to the obtained virtual memory bank address after the allocating module obtains the virtual memory bank address corresponding to the service table access packet; the RAM looks up the corresponding service table in the corresponding virtual memory bank according to the key.

Preferably, the RAM comprises one of SRAM, TCAM and SDRAM.

The beneficial effects of the embodiments of the present invention are: a RAM, a network processing system and a RAM table lookup method are provided, in which the RAM is divided into at least two virtual memory banks, the same service table is stored in at least two of the virtual memory banks, and a suitable virtual memory bank to access is determined by computation in the network processor; while the RAM table lookup rate is increased, the access traffic pressure on the RAM is effectively reduced and network forwarding performance is improved. Furthermore, the same service table can be stored in all of the virtual memory banks of the RAM to maximize the RAM table lookup rate, and the network processor can use a hash algorithm to calculate the virtual memory bank address information, which not only simplifies the calculation but also finds a suitable virtual memory bank more effectively, further improving table lookup efficiency and network forwarding performance.

Brief Description of the Drawings

FIG. 1 is a schematic structural diagram of an SDRAM according to an embodiment of the present invention; FIG. 2 is a schematic structural diagram of a network processor according to an embodiment of the present invention; FIG. 3 is a schematic structural diagram of a network processor according to another embodiment of the present invention; and FIG. 4 is a flowchart of a RAM table lookup method according to an embodiment of the present invention.

Detailed Description of the Embodiments

The overall concept of the present invention is: the RAM is divided into a plurality of virtual memory areas, and the same service table is stored in at least two of them; when a service table access packet needs to access the corresponding service table, the processor performs a corresponding calculation according to the service table identification information of the service table access packet, so as to obtain a suitable virtual memory bank address for the network processor to access. The RAM mentioned in the present invention may be of various memory types, such as SRAM (Static Random Access Memory), TCAM (Ternary Content Addressable Memory) and SDRAM (Synchronous Dynamic Random Access Memory), all of which can improve table lookup efficiency through the table lookup method of the present invention. To make the technical solutions and advantages of the present invention clearer, the present invention is further described in detail below through specific embodiments with reference to the accompanying drawings.

In the present invention, although the RAM may be of various memory types, the table lookup rate of SDRAM is relatively slow due to its own structural limitations, and the table lookup method of the present application achieves better results on SDRAM; therefore, in this embodiment the technical solution of the present application is described taking an SDRAM as the RAM as an example. Referring to FIG. 1, in this embodiment the SDRAM may be divided into at least two virtual memory banks; in order to maintain the table lookup rate of the entire SDRAM, the capacity of each virtual memory bank is preferably allocated in an equally divided manner. In this embodiment the SDRAM is divided into N virtual memory banks, and in order to increase the table lookup rate one service table may be stored in at least two virtual memory banks; a preferred storage mode is to store the same service table in all of the divided virtual memory banks, so as to maximize table lookup efficiency. The number of virtual memory banks may be arbitrary; a preferred number of banks N is calculated as N = F2/F1, where F1 is the table lookup frequency of a single virtual memory bank and F2 is the table lookup frequency actually required by the service table (a worked numerical example with assumed figures follows this description).

Referring to FIG. 2 and FIG. 3, this embodiment also provides a network processing system, which includes a network processor and a RAM. The network processor includes a receiving module, a parsing module, an allocating module, a lookup module and a processing module. The receiving module is mainly configured to receive service table access packets from each physical interface; the parsing module is configured to parse the service table access packet received by the receiving module and obtain the service table identification information of the service table accessed by the packet; the allocating module is configured to allocate a virtual memory bank address to the service table access packet according to the service table identification information; the lookup module is configured to look up the service table in the corresponding virtual memory bank of the RAM according to the obtained virtual memory bank address and forward the returned lookup result to the processing module; and the processing module is configured to perform corresponding service processing according to the returned lookup result. In this embodiment, the allocating module further includes a selecting unit, which is mainly configured to query, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, select from them the virtual memory bank with the lowest current traffic, and generate the corresponding memory bank address information. This embodiment further provides another network processor, which includes a receiving module, a parsing module, an allocating module, a lookup module and a processing module,
and the function of each module is the same as in the above embodiment, except that the allocating module in this embodiment further includes a hash calculating unit, which is configured to, when the virtual memory banks in the SDRAM all store the same service table, determine, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.

Referring to FIG. 4, the RAM table lookup method of the present application is described in detail below with reference to the RAM structure and the functions of the modules of the network processor. The RAM table lookup method in this embodiment includes the following steps.

Step 400: The network processor receives service table access packets from each physical interface; proceed to step 402. In this step, the receiving module of the network processor is mainly responsible for receiving the service table access packets from each physical interface.

Step 402: The network processor parses the service table access packet and obtains the service table identification information of the service table accessed by the packet; proceed to step 404. In this step, the parsing module is responsible for parsing the received service table access packet and obtaining the service table identification information therein; the obtained identification information is mainly information such as the MAC address information or IP address information of the service table access packet.

Step 404: A virtual memory bank address is allocated to the service table access packet according to the service table identification information; proceed to step 406. This step mainly covers the following cases: when not all of the virtual memory banks of the RAM store the same service table, the selecting unit in the allocating module queries, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, selects from them the virtual memory bank with the lowest current traffic, and generates the memory bank address information; when the virtual memory banks in the RAM all contain the same service table, the hash calculating unit in the allocating module determines, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.

Step 406: A table lookup key is constructed according to the obtained virtual memory bank address; proceed to step 408. In this step, the constructing module is responsible for constructing the corresponding table lookup key according to the virtual memory bank address calculated by the allocating module, that is, the virtual memory bank address is encoded into the corresponding table lookup key.

Step 408: The corresponding service table is looked up in the corresponding virtual memory bank according to the table lookup key; proceed to step 410. In this step, the lookup module is responsible for looking up the service table in the corresponding virtual memory bank of the RAM according to the corresponding table lookup key, and for forwarding the returned lookup result to the processing module.

Step 410: Corresponding service processing is performed according to the returned lookup result. In this step, the processing module performs the corresponding service processing according to the returned lookup result.

The above is a further detailed description of the present invention in combination with specific embodiments, and the specific implementation of the present invention shall not be regarded as being limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.
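
The description above gives the preferred number of virtual memory banks only as the formula N = F2/F1. As a worked illustration with assumed figures (the frequencies below are invented for the example and do not appear in the patent):

```latex
% Assumed figures (not from the patent): a single virtual memory bank sustains a
% lookup frequency of F_1 = 40 Mpps, while the service table must be looked up at
% the full packet rate of a 100 Gbps network processor, F_2 = 150 Mpps.
\[
  N = \frac{F_2}{F_1} = \frac{150~\mathrm{Mpps}}{40~\mathrm{Mpps}} = 3.75
\]
% Rounding up, the RAM would be divided into N = 4 virtual memory banks, each of
% which, in the preferred storage mode, stores a copy of the same service table.
```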

Claims

Claims

1. A RAM table lookup method, comprising: receiving, by a network processor, service table access packets from each physical interface; parsing, by the network processor, the service table access packet to obtain service table identification information of the service table accessed by the service table access packet; allocating a virtual memory bank address to the service table access packet according to the service table identification information, wherein the virtual memory bank address is the address of one virtual memory bank, among at least two virtual memory banks into which the RAM is divided, that contains the service table to be looked up, and in the at least two virtual memory banks of the RAM the same service table is stored in at least two of the virtual memory banks; and accessing the corresponding virtual memory bank according to the virtual memory bank address to look up the corresponding service table.

2. The RAM table lookup method according to claim 1, wherein the specific process of allocating the virtual memory bank address information is: querying, by the network processor according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, selecting from them the virtual memory bank with the lowest current traffic, and generating the virtual memory bank address information.

3. The RAM table lookup method according to claim 1, wherein, in the at least two virtual memory banks of the RAM, all of the virtual memory banks store the same service table.

4. The RAM table lookup method according to claim 3, wherein the specific process of allocating the virtual memory bank address information is: determining, by the network processor through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.

5. The RAM table lookup method according to any one of claims 1-4, wherein, after the virtual memory bank address corresponding to the service table access packet is obtained, the method further comprises: constructing a table lookup key according to the obtained virtual memory bank address, and looking up the corresponding service table in the corresponding virtual memory bank according to the key.

6. The RAM table lookup method according to any one of claims 1-4, wherein the RAM comprises one of SRAM, TCAM and SDRAM.

7. A RAM, comprising at least two virtual memory banks, wherein the same service table is stored in at least two of the virtual memory banks.

8. The RAM according to claim 7, wherein, among the at least two virtual memory banks of the RAM, all of the virtual memory banks store the same service table.

9. A network processing system, comprising a network processor and a RAM, wherein the network processor comprises a receiving module, a parsing module, an allocating module, a lookup module and a processing module; the receiving module is configured to receive service table access packets from each physical interface; the parsing module is configured to parse the service table access packet received by the receiving module and obtain service table identification information of the service table accessed by the service table access packet; the allocating module is configured to allocate a virtual memory bank address to the service table access packet according to the service table identification information, the virtual memory bank corresponding to the virtual memory bank address containing the service table to be looked up; the lookup module is configured to look up the service table in the corresponding virtual memory bank of the RAM according to the obtained virtual memory bank address, and to forward the returned lookup result to the processing module; the processing module is configured to perform corresponding service processing according to the returned lookup result; and the RAM comprises at least two virtual memory banks, and in the at least two virtual memory banks of the RAM the same service table is stored in at least two of the virtual memory banks.

10. The network processing system according to claim 9, wherein the allocating module further comprises a selecting unit, and the selecting unit is configured to query, according to the service table identification information of the service table access packet, the virtual memory banks containing the service table to be accessed, select from them the virtual memory bank with the lowest current traffic, and generate the corresponding memory bank address information.

11. The network processing system according to claim 9, wherein the allocating module further comprises a hash calculating unit, and the hash calculating unit is configured to, when the virtual memory banks in the RAM all store the same service table, determine, through a hash operation according to the service table identification information of the service table access packet, the virtual memory bank address corresponding to the service table access packet.

12. The network processing system according to any one of claims 9-11, further comprising a constructing module, wherein the constructing module is configured to construct a table lookup key according to the obtained virtual memory bank address after the allocating module obtains the virtual memory bank address corresponding to the service table access packet; and the RAM looks up the corresponding service table in the corresponding virtual memory bank according to the key.

13. The network processing system according to any one of claims 9-11, wherein the RAM comprises one of SRAM, TCAM and SDRAM.
PCT/CN2013/089238 2012-12-18 2013-12-12 RAM, network processing system and RAM table lookup method WO2014094569A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US14/653,506 US20150350076A1 (en) 2012-12-18 2013-12-12 Ram, network processing system and table lookup method for ram
EP13864996.7A EP2937793B1 (en) 2012-12-18 2013-12-12 Ram, network processing system and table look-up method for ram
RU2015127508A RU2642358C2 (ru) 2012-12-18 2013-12-12 Ram, система обработки данных сети и способ табличного поиска для ram

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210549857.X 2012-12-18
CN201210549857.XA CN103064901B (zh) RAM, network processing system and RAM table lookup method

Publications (1)

Publication Number Publication Date
WO2014094569A1 true WO2014094569A1 (zh) 2014-06-26

Family

ID=48107531

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/089238 WO2014094569A1 (zh) RAM, network processing system and RAM table lookup method

Country Status (5)

Country Link
US (1) US20150350076A1 (zh)
EP (1) EP2937793B1 (zh)
CN (1) CN103064901B (zh)
RU (1) RU2642358C2 (zh)
WO (1) WO2014094569A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064901B (zh) * 2012-12-18 2017-02-22 中兴通讯股份有限公司 RAM, network processing system and RAM table lookup method
EP3110092B1 (en) * 2014-03-24 2019-03-13 Huawei Technologies Co., Ltd. Method for determining storage location for tables, forwarding device, and controller
CN112632340B (zh) * 2020-12-28 2024-04-16 苏州盛科通信股份有限公司 Table lookup method and apparatus, storage medium and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1655534A (zh) * 2005-02-25 2005-08-17 清华大学 Dual-stack compatible route lookup engine supporting access control list function on a core router
US20090216994A1 (en) * 2008-02-25 2009-08-27 International Business Machines Corporation Processor, method and computer program product for fast selective invalidation of translation lookaside buffer
CN102402611A (zh) * 2011-12-12 2012-04-04 盛科网络(苏州)有限公司 Method for fast keyword lookup and table reading using TCAM
CN103064901A (zh) * 2012-12-18 2013-04-24 中兴通讯股份有限公司 RAM, network processing system and RAM table lookup method

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5265227A (en) * 1989-11-14 1993-11-23 Intel Corporation Parallel protection checking in an address translation look-aside buffer
US6067547A (en) * 1997-08-12 2000-05-23 Microsoft Corporation Hash table expansion and contraction for use with internal searching
US6963566B1 (en) * 2001-05-10 2005-11-08 Advanced Micro Devices, Inc. Multiple address lookup engines running in parallel in a switch for a packet-switched network
JP2003338835A (ja) * 2002-05-20 2003-11-28 Fujitsu Ltd パケットスイッチ及び方法
US7710972B2 (en) * 2006-12-21 2010-05-04 Intel Corporation Discrete table descriptor for unified table management
US20080281789A1 (en) * 2007-05-10 2008-11-13 Raza Microelectronics, Inc. Method and apparatus for implementing a search engine using an SRAM
CN100596077C (zh) * 2007-08-16 2010-03-24 华为技术有限公司 Method and device for channelized logical single-channel statistics
US8284664B1 (en) * 2007-09-28 2012-10-09 Juniper Networks, Inc. Redirecting data units to service modules based on service tags and a redirection table
CN102067528B (zh) * 2008-06-19 2014-01-15 马维尔国际贸易有限公司 用于搜索的级联存储器表
CN101290635A (zh) * 2008-06-24 2008-10-22 中兴通讯股份有限公司 Memory management method based on feature words and device thereof
EP2377273B1 (en) * 2009-01-12 2015-08-26 Hewlett-Packard Development Company, L.P. Reducing propagation of message floods in computer networks
US8488489B2 (en) * 2009-06-16 2013-07-16 Lsi Corporation Scalable packet-switch
EP2665003A1 (en) * 2009-06-19 2013-11-20 Blekko, Inc. Search term based query method with modifiers expressed as slash operators
CN101655824A (zh) * 2009-08-25 2010-02-24 北京广利核系统工程有限公司 Method for implementing mutually exclusive access to a dual-port RAM
US9280609B2 (en) * 2009-09-08 2016-03-08 Brocade Communications Systems, Inc. Exact match lookup scheme

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1655534A (zh) * 2005-02-25 2005-08-17 清华大学 Dual-stack compatible route lookup engine supporting access control list function on a core router
US20090216994A1 (en) * 2008-02-25 2009-08-27 International Business Machines Corporation Processor, method and computer program product for fast selective invalidation of translation lookaside buffer
CN102402611A (zh) * 2011-12-12 2012-04-04 盛科网络(苏州)有限公司 Method for fast keyword lookup and table reading using TCAM
CN103064901A (zh) * 2012-12-18 2013-04-24 中兴通讯股份有限公司 RAM, network processing system and RAM table lookup method

Also Published As

Publication number Publication date
CN103064901B (zh) 2017-02-22
US20150350076A1 (en) 2015-12-03
EP2937793A4 (en) 2015-12-30
RU2015127508A (ru) 2017-01-24
RU2642358C2 (ru) 2018-01-24
EP2937793A1 (en) 2015-10-28
CN103064901A (zh) 2013-04-24
EP2937793B1 (en) 2018-03-14

Similar Documents

Publication Publication Date Title
US11102120B2 (en) Storing keys with variable sizes in a multi-bank database
EP2793436B1 (en) Content router forwarding plane architecture
JP6190754B2 (ja) ネットワークスイッチにおける集中型メモリプールを用いるテーブル検索のための装置および方法
US7606236B2 (en) Forwarding information base lookup method
Quan et al. Scalable name lookup with adaptive prefix bloom filter for named data networking
US9704574B1 (en) Method and apparatus for pattern matching
US7281085B1 (en) Method and device for virtualization of multiple data sets on same associative memory
US20130031559A1 (en) Method and apparatus for assignment of virtual resources within a cloud environment
Bando et al. FlashTrie: beyond 100-Gb/s IP route lookup using hash-based prefix-compressed trie
CN1270728A (zh) 快速路由查找的方法和系统
US11502956B2 (en) Method for content caching in information-centric network virtualization
US9485179B2 (en) Apparatus and method for scalable and flexible table search in a network switch
WO2014094569A1 (zh) RAM, network processing system and RAM table lookup method
US8472350B2 (en) Bank aware multi-bit trie
CN109981476B (zh) 一种负载均衡方法和装置
Bando et al. Flashlook: 100-gbps hash-tuned route lookup architecture
CN103457855A (zh) 无类域间路由表建立、以及报文转发的方法和装置
CN102739550B (zh) 基于随机副本分配的多存储器流水路由体系结构
Qiu et al. Ultra-low-latency and flexible in-memory key-value store system design on CPU-FPGA
WO2014169874A1 (zh) Table entry management device, table entry management method and computer storage medium
Saxena et al. Scalable, high-speed on-chip-based NDN name forwarding using FPGA
WO2021027645A1 (zh) Network packet sending method and apparatus, and network processor
Jing et al. An Efficient Name Look-up Architecture Based on Binary Search in NDN Networking
Goel et al. Energy efficient air indexing schemes for single and multi-level wireless channels
Dai et al. A truly scalable IP lookup algorithm for next generation internet

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13864996

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 14653506

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2013864996

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2015127508

Country of ref document: RU

Kind code of ref document: A