CN110460529B - Data processing method and chip for forwarding information base storage structure of content router

Info

Publication number
CN110460529B
Authority
CN
China
Prior art keywords
chip
forwarding information
index
storage unit
address
Prior art date
Legal status
Active
Application number
CN201910572071.1A
Other languages
Chinese (zh)
Other versions
CN110460529A (en)
Inventor
闫柳
李卓
刘开华
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201910572071.1A
Publication of CN110460529A
Application granted
Publication of CN110460529B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/74: Address processing for routing
    • H04L45/742: Route cache; Operation thereof
    • H04L45/745: Address table lookup; Address filtering
    • H04L45/748: Address table lookup; Address filtering using longest matching prefix

Abstract

The invention discloses an FIB storage structure for a content router, comprising an on-chip storage unit and an off-chip storage unit. The on-chip storage unit uses high-speed memory and deploys a plurality of index structures, each corresponding to a different number of name prefix components, to enable fast indexing of name data under the longest name prefix matching mechanism. The off-chip storage unit uses large-capacity, low-speed memory and deploys a plurality of FIB storage pools, one per index structure, to store the actual forwarding information. The on-chip index structure is implemented with a neural network and improves storage efficiency by learning the distribution of index entries in memory. False positives that the on-chip index structure may produce are handled in the off-chip FIB storage pools by chained addressing, that is, data mapped to the same address are connected in a linked list. The structure of the invention is suited to the hardware level of current memory, effectively supports the longest name prefix matching mechanism, and improves the retrieval speed of name data.

Description

Data processing method and chip for forwarding information base storage structure of content router
Technical Field
The invention belongs to the field of high-performance router storage structure design, and in particular addresses the structural design and data processing of the FIB of a named data network content router.
Background
With the continued development of the Internet and mobile communication technology, users are no longer satisfied with the traditional point-to-point communication model and instead wish to distribute and share data widely. However, the Internet communication model based on traditional TCP/IP technology has gradually exposed shortcomings such as IP address space exhaustion, poor mobility and weak security, and can no longer satisfy users' demands for low-latency data transmission and high-quality communication. For these reasons, the named data network, a new future network architecture, has attracted wide attention since it was first proposed.
Compared with the traditional TCP/IP Internet, the named data network uses a communication model oriented entirely toward data content: it no longer cares where content is stored, only about the content itself. By deploying caches at routing nodes, the named data network greatly reduces network load, raises the sharing rate of network resources and improves data transmission performance.
In a named data network, all communication is driven by the consumer through the exchange of Interest packets and Data packets that carry name identifiers. To request data, the consumer first sends an Interest packet into the network; each router records the incoming interface of the Interest packet and forwards it, according to its name, using the Forwarding Information Base (FIB), until the Interest reaches a node holding the corresponding Data packet, which is then sent back to the consumer. The design of the FIB is therefore directly related to the operational performance of the named data network content router. However, because content names differ from IP addresses in many respects, FIB research for content routers faces a series of challenges. First, the FIB uses name strings as its index keys; names are long and unbounded, and content names are highly personalized across application scenarios, so supporting fast retrieval of such differentiated names is a recognized hard problem. Second, FIB entries can number in the millions, and recording content names and forwarding information far more complex than IP addresses requires much more storage space, so storing forwarding information efficiently in limited memory is very challenging. Third, the content router FIB relies on a Longest Name Prefix Matching (LNPM) mechanism that differs from the longest prefix matching of IP network routers, and how to support this mechanism efficiently remains an open problem.
Since the named data network was proposed in 2010, research on and design of content router FIBs has attracted wide attention from academia at home and abroad. However, the FIB designs proposed so far find it difficult to balance storage consumption against name retrieval speed, and do not fully consider how to support differentiated name retrieval and the LNPM mechanism. There is therefore an urgent need for a new overall FIB design that addresses these problems and challenges.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a novel content router FIB storage structure and a data processing method for it. The structure is suited to the hardware level of current memory, effectively supports the LNPM mechanism, and improves the retrieval speed of name data.
To solve the technical problem, the content router FIB storage structure provided by the invention comprises an on-chip storage unit and an off-chip storage unit. The on-chip storage unit uses high-speed memory and deploys a plurality of index structures, each corresponding to a different number of name prefix components, to enable fast indexing of name data under the LNPM mechanism.
The index structure in the on-chip storage unit is implemented on the basis of a neural network and comprises an input unit, a model unit and an output unit. The input unit converts routing table index data into input vectors: each piece of index data is split into several sub-vectors, and a bitwise XOR is taken over the elements at the same position of all sub-vectors to obtain the input vector corresponding to that index data. The model unit is used to train and predict cumulative distribution function values and comprises a first-stage neural network and a plurality of second-stage neural networks. The output unit multiplies the cumulative distribution function value predicted by the model unit by the total number of slots in the mapping table to obtain a position in the mapping table, and then derives the final index address from the base address of the partition containing that position and the actual memory address offset recorded at that position.
The off-chip storage unit uses low-speed memory with gigabyte-scale capacity and deploys a plurality of FIB storage pools, one per index structure, to store the actual forwarding information; the off-chip storage unit also handles false positives that the on-chip index structure may produce.
Further, in the content router FIB storage structure of the invention, the off-chip storage unit handles such false positives by chained addressing in the off-chip FIB storage pool, that is, data mapped to the same address are connected in a linked list.
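The following is a minimal sketch of how such a learned on-chip index could be organized, written in Python for illustration only. The class name, the normalization of the folded vector, the default sub-vector length and the way the partition is derived from the slot are assumptions of this sketch rather than details taken from the patent; the two model stages are passed in as arbitrary callables.

    # Illustrative sketch (not the patented implementation) of the on-chip learned index:
    # the name is folded into a fixed-length vector by XOR, a two-stage model predicts a
    # cumulative distribution function (CDF) value, and the CDF value is scaled to a slot
    # of the mapping table whose recorded offset, added to the base address of the slot's
    # partition, yields the off-chip index address.
    import numpy as np

    class LearnedIndex:
        def __init__(self, stage1, stage2_models, mapping_table, base_addresses,
                     sub_vector_len=16):
            self.stage1 = stage1                # first-stage neural network (callable)
            self.stage2 = stage2_models         # second-stage neural networks (callables)
            self.table = mapping_table          # slot -> memory address offset (0 = empty)
            self.bases = base_addresses         # partition index -> base address
            self.k = sub_vector_len

        def fold(self, name: bytes) -> np.ndarray:
            """Input unit: split the name into k-byte sub-vectors and XOR them bitwise."""
            padded = name.ljust(-(-len(name) // self.k) * self.k, b"\x00")
            chunks = np.frombuffer(padded, dtype=np.uint8).reshape(-1, self.k)
            return np.bitwise_xor.reduce(chunks, axis=0).astype(np.float32) / 255.0

        def predict_cdf(self, x: np.ndarray) -> float:
            """Model unit: stage one selects a stage-two model, which predicts the CDF value."""
            expert = max(0, min(int(float(self.stage1(x)) * len(self.stage2)),
                                len(self.stage2) - 1))
            return float(np.clip(self.stage2[expert](x), 0.0, 1.0))

        def lookup(self, name: bytes):
            """Output unit: CDF value -> mapping-table slot -> partition base + offset."""
            slot = min(int(self.predict_cdf(self.fold(name)) * len(self.table)),
                       len(self.table) - 1)
            offset = self.table[slot]
            if offset == 0:                     # empty slot: prefix not in the FIB
                return None
            partition = slot * len(self.bases) // len(self.table)
            return self.bases[partition] + offset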
In addition, the invention provides a data processing method for the content router FIB storage structure, which mainly comprises retrieving the forwarding information of an Interest packet and updating forwarding information.
The method for retrieving the forwarding information of an Interest packet comprises the following steps (an illustrative code sketch of the off-chip storage pool accessed in step seven is given after the steps):
step one, input the name data: input the content name of the Interest packet into the FIB storage structure;
step two, input processing in the on-chip storage unit: the input unit splits the content name vector of the Interest packet into several sub-vectors and performs a bitwise XOR over the elements at the same position of all sub-vectors to obtain a fixed-length input vector;
step three, model operation in the on-chip storage unit: input the fixed-length input vector obtained in step two into the model unit and compute the predicted cumulative distribution function value;
step four, position mapping in the on-chip storage unit: the output unit multiplies the cumulative distribution function value predicted in step three by the total number of slots in the mapping table to obtain the position in the mapping table;
step five, data existence judgment in the on-chip storage unit: if the value at that position in the mapping table is not 0, the name prefix corresponding to the Interest packet is in the FIB storage structure, and step six is executed; if the value at that position is 0, the name prefix corresponding to the Interest packet is not in the FIB storage structure, and the method goes to step eight;
step six, index address calculation in the on-chip storage unit: determine the partition of the mapping table in which the position lies, and derive the index address from the base address of that partition and the actual memory address offset recorded at the position;
step seven, forwarding information output from the off-chip storage unit: access the corresponding address in the off-chip FIB storage pool according to the index address, output the next-hop routing forwarding information, and go to step nine;
step eight, output the retrieval result: no match;
step nine, the operation of retrieving the forwarding information of the Interest packet is finished.
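A sketch of the off-chip FIB storage pool accessed in step seven is given below; the chained addressing described above is modeled as a singly linked list per bucket. The class and method names (FibEntry, FibStoragePool, read, insert, delete) are illustrative assumptions, not terms from the patent.

    # Sketch of an off-chip FIB storage pool with chained addressing: the on-chip index
    # address selects a bucket, and the chain is walked while comparing the stored name
    # prefix, which also filters out on-chip false positives.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FibEntry:
        name_prefix: str
        next_hop: str
        next: "Optional[FibEntry]" = None       # chained-addressing link

    class FibStoragePool:
        def __init__(self, size: int):
            self.buckets = [None] * size        # address -> head of chain

        def read(self, address: int, name_prefix: str) -> Optional[FibEntry]:
            """Walk the chain at `address`; a miss means the on-chip hit was a false positive."""
            entry = self.buckets[address]
            while entry is not None:
                if entry.name_prefix == name_prefix:
                    return entry
                entry = entry.next
            return None

        def insert(self, address: int, name_prefix: str, next_hop: str) -> None:
            """Prepend a new entry to the chain at `address` (used by the add operation)."""
            self.buckets[address] = FibEntry(name_prefix, next_hop, self.buckets[address])

        def delete(self, address: int, name_prefix: str) -> None:
            """Unlink the entry whose name prefix matches (used by the delete operation)."""
            prev, entry = None, self.buckets[address]
            while entry is not None and entry.name_prefix != name_prefix:
                prev, entry = entry, entry.next
            if entry is not None:
                if prev is None:
                    self.buckets[address] = entry.next
                else:
                    prev.next = entry.next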
Updating the forwarding information comprises the following steps (an illustrative code sketch of the three update operations is given after the steps):
step one, input the name data and update type: input the content name to be updated and the corresponding update type into the FIB storage structure;
step two, input processing in the on-chip storage unit: split the content name vector into several sub-vectors and perform a bitwise XOR over the elements at the same position of all sub-vectors to obtain a fixed-length input vector;
step three, model operation in the on-chip storage unit: input the fixed-length input vector into the model unit and compute the predicted cumulative distribution function value;
step four, position mapping in the on-chip storage unit: the output unit multiplies the cumulative distribution function value predicted in step three by the total number of slots in the mapping table to obtain the position in the mapping table, and determines the partition of the mapping table in which the position lies;
step five, update type judgment in the on-chip storage unit: if the update type is addition, execute step six; if the update type is modification, go to step eight; if the update type is deletion, go to step nine;
step six, add operation in the on-chip storage unit: insert the content name to be updated into the on-chip index structure corresponding to its prefix length, and obtain the index address from the base address of the partition containing the position in the mapping table and the actual memory address offset recorded at the position;
step seven, add operation in the off-chip storage unit: access the corresponding address in the off-chip FIB storage pool according to the index address, insert the corresponding routing forwarding information, and go to step eleven;
step eight, modify operation in the off-chip storage unit: obtain the index address from the base address of the partition containing the position in the mapping table and the actual memory address offset recorded at the position, access the corresponding address in the off-chip FIB storage pool according to the index address, modify the corresponding routing forwarding information, and go to step eleven;
step nine, delete operation in the off-chip storage unit: obtain the index address from the base address of the partition containing the position in the mapping table and the actual memory address offset recorded at the position, access the corresponding address in the off-chip FIB storage pool according to the index address, and delete the corresponding routing forwarding information;
step ten, delete operation in the on-chip storage unit: clear the corresponding position in the mapping table;
step eleven, the forwarding information update operation is finished.
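The three update operations can be tied together as in the sketch below, which reuses the illustrative LearnedIndex and FibStoragePool classes from the earlier snippets. The on-chip helpers insert_entry and clear_slot are assumed to exist on the learned index (they would write or clear the mapping-table slot); they are assumptions of this sketch, not methods defined in the patent.

    # Sketch of the forwarding-information update flow: "add" writes on-chip then
    # off-chip, "modify" touches only the off-chip entry, and "delete" removes the
    # off-chip entry and then clears the on-chip mapping-table slot.
    def update_fib(index, pool, name_prefix: str, update_type: str, next_hop: str = ""):
        key = name_prefix.encode()
        if update_type == "add":
            address = index.insert_entry(key)        # assumed on-chip add: record offset, return address
            pool.insert(address, name_prefix, next_hop)
        elif update_type == "modify":
            address = index.lookup(key)
            entry = pool.read(address, name_prefix) if address is not None else None
            if entry is not None:
                entry.next_hop = next_hop            # off-chip modify only
        elif update_type == "delete":
            address = index.lookup(key)
            if address is not None:
                pool.delete(address, name_prefix)    # off-chip delete first ...
                index.clear_slot(key)                # ... then clear the on-chip slot (assumed helper)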
Compared with the prior art, the invention has the beneficial effects that:
the FIB storage structure of the content router and the data processing method thereof are used for carrying out software deployment test on a computer configured as Intel Xeon E5-1650 v23.50GHz and DDR 324 GB SDRAM. The experimental result shows that under the condition that the misjudgment probability is 1%, the on-chip storage consumption of the FIB storage structure is 58.258MB, and the FIB storage structure can be deployed on a high-speed memory SRAM. Under the condition of supporting the LNPM mechanism, the actual throughput of the FIB storage structure is 1,164 ten thousand packets/second, and the requirement of the current Internet for rapidly processing the packets can be met. Therefore, the FIB storage structure of the content router and the data processing method thereof designed by the invention are suitable for the hardware level of the current memory, can effectively support an LNPM mechanism, and improve the retrieval speed of name data.
Drawings
FIG. 1 is a structural layout diagram of the FIB storage structure of the content router of the present invention;
FIG. 2 is a diagram of an on-chip index structure of the FIB storage structure of the content router of the present invention;
FIG. 3 is a flow chart of the method for retrieving the forwarding information of an Interest packet in the data processing method of the present invention;
FIG. 4 is a flow chart of the method for updating forwarding information in the data processing method of the present invention.
Detailed Description
The invention will be further described with reference to the following figures and specific examples, which are not intended to limit the invention in any way.
The invention provides a content router FIB storage structure comprising an on-chip storage unit and an off-chip storage unit. The on-chip storage unit uses high-speed memory and deploys a plurality of index structures, each corresponding to a different number of name prefix components, to enable fast indexing of name data under the LNPM mechanism; the off-chip storage unit uses low-speed memory with a large storage space and deploys a plurality of FIB storage pools, one per index structure, to store the actual forwarding information. The on-chip index structure is implemented on the basis of a neural network and improves storage efficiency by learning the distribution of index entries in memory. False positives that the on-chip index structure may produce are handled by chained addressing in the off-chip FIB storage pools, that is, data mapped to the same address are connected in a linked list.
The invention also provides a data processing method for the content router FIB storage structure, comprising a method for retrieving the forwarding information of an Interest packet and a method for updating forwarding information. When retrieving the forwarding information of an Interest packet, a parallel retrieval operation is first executed over the on-chip index structure group according to the LNPM mechanism to judge whether a name prefix corresponding to the Interest packet is in the FIB storage structure. If a longest matching prefix exists, its address in the off-chip FIB storage pool is obtained from the output of the index structure and the next-hop routing forwarding information is read; if no matching prefix exists, the retrieval result 'no match' is output. When updating forwarding information: for an add operation, the name is first inserted into the on-chip index structure corresponding to its prefix length, then the off-chip FIB storage pool is accessed according to the output of the index structure and the corresponding routing forwarding information is inserted; for a modify operation, the off-chip FIB storage pool is accessed directly according to the output of the on-chip index structure and the corresponding routing forwarding information is modified; for a delete operation, the records in the on-chip index structure and in the off-chip FIB storage pool are deleted synchronously.
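Under the same assumptions as the earlier snippets (one illustrative LearnedIndex and one FibStoragePool per prefix component count), the overall Interest retrieval flow can be sketched as follows. For clarity the sketch probes the prefixes one by one from the longest down; the structure described above issues these probes in parallel over the on-chip index structure group and keeps the longest hit.

    # Sketch of LNPM retrieval over the index-structure group: the content name is split
    # into components, each candidate prefix is looked up in the index for its component
    # count, and the first (longest) prefix confirmed off-chip wins.
    def retrieve_forwarding_info(content_name: str, index_group: dict, pool_group: dict):
        components = content_name.strip("/").split("/")
        for n in range(len(components), 0, -1):          # longest name prefix first
            if n not in index_group:
                continue
            prefix = "/" + "/".join(components[:n])
            address = index_group[n].lookup(prefix.encode())
            if address is None:                          # mapping-table slot was 0
                continue
            entry = pool_group[n].read(address, prefix)  # resolves false positives
            if entry is not None:
                return entry.next_hop                    # next-hop forwarding information
        return None                                      # retrieval result: no match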
As shown in FIG. 1, the content router FIB storage structure of the invention comprises an on-chip storage unit and an off-chip storage unit, described in detail as follows.
The on-chip storage unit uses high-speed memory and deploys a plurality of index structures, each corresponding to a different number of name prefix components, to enable fast indexing of name data under the LNPM mechanism; the off-chip storage unit uses low-speed memory with a large storage space and deploys a plurality of FIB storage pools, one per index structure, to store the actual forwarding information.
The index structure in the on-chip storage unit is implemented on the basis of a neural network and improves storage efficiency by learning the distribution of index entries in memory. As shown in FIG. 2, the input unit converts routing table index data into input vectors: each piece of index data is split into several sub-vectors, and a bitwise XOR is taken over the elements at the same position of all sub-vectors to obtain the input vector corresponding to that index data. The model unit is used to train and predict cumulative distribution function values and comprises a first-stage neural network and a plurality of second-stage neural networks. The output unit multiplies the predicted cumulative distribution function value by the total number of slots in the mapping table to obtain the position in the mapping table, and then derives the final index address from the base address of the partition containing that position and the actual memory address offset recorded at the position. False positives that the on-chip index structure may produce are handled by chained addressing in the off-chip FIB storage pool, that is, data mapped to the same address are connected in a linked list.
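As a small worked example of the input unit's folding (using an invented name and a sub-vector length of 4 bytes chosen only for illustration), a variable-length name reduces to a fixed-length vector as follows:

    # The name bytes are padded, split into 4-byte sub-vectors and XOR-ed element-wise.
    name = b"/tju/cs/a"                          # 9 bytes, padded to 12 with zero bytes
    subs = [b"/tju", b"/cs/", b"a\x00\x00\x00"]
    folded = bytes(x ^ y ^ z for x, y, z in zip(*subs))
    print(folded.hex())                          # "6117195a": the fixed-length input vector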
The data processing method for the content router FIB storage structure comprises a method for retrieving the forwarding information of an Interest packet and a method for updating forwarding information:
(1) The method for retrieving the forwarding information of an Interest packet, as shown in FIG. 3, comprises the following steps:
step one, input the name data: input the content name of the Interest packet into the FIB storage structure;
step two, input processing in the on-chip storage unit: split the content name vector into several sub-vectors and perform a bitwise XOR over the elements at the same position of all sub-vectors to obtain a fixed-length input vector;
step three, model operation in the on-chip storage unit: input the fixed-length input vector into the neural network model and compute the predicted cumulative distribution function value;
step four, position mapping in the on-chip storage unit: multiply the predicted cumulative distribution function value by the total number of slots in the mapping table to obtain the position in the mapping table;
step five, data existence judgment in the on-chip storage unit: if the value at that position in the mapping table is not 0, the name prefix corresponding to the Interest packet is in the FIB storage structure and execution continues with step six; if the value at that position is 0, the name prefix corresponding to the Interest packet is not in the FIB storage structure and the method goes to step eight;
step six, index address calculation in the on-chip storage unit: determine the partition of the mapping table in which the position lies, and derive the index address from the base address of that partition and the actual memory address offset recorded at the position;
step seven, forwarding information output from the off-chip storage unit: access the corresponding address in the off-chip FIB storage pool according to the index address, output the next-hop routing forwarding information, and go to step nine;
step eight, output the retrieval result: no match;
step nine, the operation of retrieving the forwarding information of the Interest packet is finished.
(2) The method for updating forwarding information, as shown in FIG. 4, comprises the following steps:
step one, input the name data: input the content name to be updated into the FIB storage structure;
step two, input processing in the on-chip storage unit: split the content name vector into several sub-vectors and perform a bitwise XOR over the elements at the same position of all sub-vectors to obtain a fixed-length input vector;
step three, model operation in the on-chip storage unit: input the fixed-length input vector into the neural network model and compute the predicted cumulative distribution function value;
step four, position mapping in the on-chip storage unit: multiply the predicted cumulative distribution function value by the total number of slots in the mapping table to obtain the position in the mapping table, and determine the partition of the mapping table in which the position lies;
step five, update type judgment in the on-chip storage unit: if the update type is 'add', execution continues with step six; if the update type is 'modify', go to step eight; if the update type is 'delete', go to step nine;
step six, add operation in the on-chip storage unit: insert the name into the on-chip index structure corresponding to its prefix length, and obtain the index address from the base address of the partition containing the position in the mapping table and the actual memory address offset recorded at the position;
step seven, add operation in the off-chip storage unit: access the corresponding address in the off-chip FIB storage pool according to the index address, insert the corresponding routing forwarding information, and go to step eleven;
step eight, modify operation in the off-chip storage unit: obtain the index address from the base address of the partition containing the position in the mapping table and the actual memory address offset recorded at the position, access the corresponding address in the off-chip FIB storage pool according to the index address, modify the corresponding routing forwarding information, and go to step eleven;
step nine, delete operation in the off-chip storage unit: obtain the index address from the base address of the partition containing the position in the mapping table and the actual memory address offset recorded at the position, access the corresponding address in the off-chip FIB storage pool according to the index address, and delete the corresponding routing forwarding information;
step ten, delete operation in the on-chip storage unit: clear the corresponding position in the mapping table;
step eleven, the forwarding information update operation is finished.
Example:
The content router FIB storage structure and its data processing method were deployed and tested in software on a computer configured with an Intel Xeon E5-1650 v2 3.50 GHz processor and 24 GB of DDR3 SDRAM. The experimental results show that, with a false positive probability of 1%, the on-chip storage consumption of the FIB storage structure is 58.258 MB, which can be deployed in high-speed SRAM. While supporting the LNPM mechanism, the actual throughput of the FIB storage structure is 11.64 million packets per second, which meets the current Internet's requirement for fast packet processing. The content router FIB storage structure and data processing method designed by the invention are therefore suited to the hardware level of current memory, effectively support the LNPM mechanism, and improve the retrieval speed of name data.
Although the invention has been described above with reference to the accompanying drawings, the invention is not limited to the above embodiments, which are illustrative rather than restrictive. Those skilled in the art may make various modifications without departing from the spirit of the invention, and such modifications fall within the scope of protection of the claims.

Claims (2)

1. A data processing method of a content router forwarding information base storage structure is characterized by comprising the steps of retrieving forwarding information of an interest packet and updating the forwarding information; wherein:
the retrieval of the forwarding information of the interest packet comprises the following steps:
step 1-1, inputting name data: inputting the content name of the interest packet into a forwarding information base storage structure;
step 1-2, input processing of an on-chip storage unit: an input unit of the on-chip storage unit splits the content name vector of the interest packet into a plurality of sub-vectors, and then performs bitwise XOR operation on elements at the same position in all the sub-vectors to obtain a fixed-length input vector;
step 1-3, model operation of an on-chip memory unit: inputting the fixed-length input vector obtained in the step 1-2 into a model unit of an on-chip storage unit, and calculating a predicted cumulative distribution function value;
step 1-4, mapping the position of an on-chip storage unit: the output unit of the on-chip storage unit multiplies the predicted cumulative distribution function value in the step 1-3 by the total number of the slots in the mapping table to obtain the mapping position in the mapping table;
step 1-5, judging the existence of data in the on-chip storage unit: if the value at that position in the mapping table is not 0, the name prefix corresponding to the interest packet is in the forwarding information base storage structure, and step 1-6 is executed; if the value at that position is 0, the name prefix corresponding to the interest packet is not in the forwarding information base storage structure, and the method goes to step 1-8;
step 1-6, calculating the index address in the on-chip storage unit: determining the partition of the mapping table in which the position lies, and obtaining the index address according to the base address corresponding to that partition and the actual memory address offset recorded at the position;
step 1-7, outputting forwarding information from the off-chip storage unit: accessing the corresponding address in the off-chip forwarding information base storage pool according to the index address, outputting the next-hop routing forwarding information, and going to step 1-9;
step 1-8, outputting the retrieval result: no match;
step 1-9, finishing the operation of retrieving the forwarding information of the interest packet;
the updating of the forwarding information comprises the following steps:
step 2-1, inputting name data and update types: inputting the content name to be updated and the corresponding update type into a forwarding information base storage structure;
step 2-2, input processing of the on-chip storage unit: splitting the content name vector into a plurality of sub-vectors, and then performing bitwise XOR operation on elements at the same position in all the sub-vectors to obtain a fixed-length input vector;
step 2-3, model operation of the on-chip memory unit: inputting the fixed-length input vector into a model unit of an on-chip storage unit, and calculating a predicted cumulative distribution function value;
step 2-4, mapping the position in the on-chip storage unit: the output unit of the on-chip storage unit multiplies the cumulative distribution function value predicted in step 2-3 by the total number of slots in the mapping table to obtain a position in the mapping table, and determines the partition of the mapping table in which the position lies;
step 2-5, judging the update type in the on-chip storage unit: if the update type is addition, executing step 2-6; if the update type is modification, going to step 2-8; if the update type is deletion, going to step 2-9;
step 2-6, adding operation of the on-chip storage unit: inserting the content name to be updated into the on-chip index structure corresponding to its prefix length, and obtaining the index address according to the base address corresponding to the partition containing the position in the mapping table and the actual memory address offset recorded at the position;
step 2-7, adding operation of the off-chip storage unit: accessing the corresponding address in the off-chip forwarding information base storage pool according to the index address, inserting the corresponding routing forwarding information, and going to step 2-11;
step 2-8, modifying operation of the off-chip storage unit: obtaining the index address according to the base address corresponding to the partition containing the position in the mapping table and the actual memory address offset recorded at the position, accessing the corresponding address in the off-chip forwarding information base storage pool according to the index address, modifying the corresponding routing forwarding information, and going to step 2-11;
step 2-9, deleting operation of the off-chip storage unit: obtaining the index address according to the base address corresponding to the partition containing the position in the mapping table and the actual memory address offset recorded at the position, accessing the corresponding address in the off-chip forwarding information base storage pool according to the index address, and deleting the corresponding routing forwarding information;
step 2-10, deleting operation of the on-chip storage unit: clearing the corresponding position in the mapping table;
step 2-11, finishing the updating operation of the forwarding information.
2. A content router forwarding information base storage structure chip for implementing the data processing method of claim 1, comprising an on-chip storage unit and an off-chip storage unit, characterized in that:
the on-chip storage unit uses a high-speed memory, wherein a plurality of index structures corresponding to different numbers of name prefix components are arranged, so as to realize the quick index of the name data based on the longest name prefix matching mechanism;
the index structure in the on-chip storage unit is realized on the basis of a neural network, and comprises an input unit, a model unit and an output unit;
the input unit is used for converting the routing table index data into input vectors, dividing each piece of index data into a plurality of sub-vectors, and then performing bitwise XOR operation on elements at the same position in all the sub-vectors to finally obtain the input vector corresponding to the index data;
the model unit is used for training and predicting cumulative distribution function values, and comprises a first-stage neural network and a plurality of second-stage neural networks;
the output unit multiplies the cumulative distribution function value predicted by the model unit by the total number of slots in the mapping table to obtain a position in the mapping table, and then obtains the final index address according to the base address corresponding to the partition in which the position is located and the actual memory address offset recorded at the position;
the off-chip storage unit uses low-speed memory with gigabyte-scale storage space and deploys a plurality of forwarding information base storage pools corresponding to the index structures to store the actual forwarding information; meanwhile, the off-chip storage unit handles false positives that may be produced by the on-chip index structure, using chained addressing in the off-chip forwarding information base storage pool to resolve conflicts, that is, data mapped to the same address are connected in a linked list.
CN201910572071.1A (filed 2019-06-28): Data processing method and chip for forwarding information base storage structure of content router; granted as CN110460529B (Active)

Priority Applications (1)

Application Number: CN201910572071.1A
Priority Date: 2019-06-28
Filing Date: 2019-06-28
Title: Data processing method and chip for forwarding information base storage structure of content router


Publications (2)

Publication Number Publication Date
CN110460529A 2019-11-15
CN110460529B 2021-06-08

Family

ID=68481819

Family Applications (1)

Application Number: CN201910572071.1A (Active)
Title: Data processing method and chip for forwarding information base storage structure of content router

Country Status (1)

Country Link
CN (1) CN110460529B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126625B (en) * 2019-12-20 2022-05-20 华中科技大学 Extensible learning index method and system
CN113220679A (en) * 2021-04-29 2021-08-06 天津大学 Mixed FIB storage structure facing multi-mode network and data processing method thereof
CN113220683A (en) * 2021-05-08 2021-08-06 天津大学 Content router PIT structure supporting flooding attack detection and data retrieval method thereof
CN115473846A (en) * 2022-09-06 2022-12-13 国网河北省电力有限公司电力科学研究院 Router forwarding information retrieval method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7551609B2 (en) * 2005-10-21 2009-06-23 Cisco Technology, Inc. Data structure for storing and accessing multiple independent sets of forwarding information
CN101552840A (en) * 2009-03-09 2009-10-07 北京天碁科技有限公司 A method to reduce the power consumption of mobile terminals and the mobile terminals
US7606236B2 (en) * 2004-05-21 2009-10-20 Intel Corporation Forwarding information base lookup method
CN104780101A (en) * 2015-01-30 2015-07-15 天津大学 FIB (Forward Information Base) table structure in named data networking forwarding plane and retrieval method thereof
CN104794100A (en) * 2015-05-06 2015-07-22 西安电子科技大学 Heterogeneous multi-core processing system based on on-chip network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7424014B2 (en) * 2002-11-12 2008-09-09 Cisco Technology, Inc. System and method for local packet transport services within distributed routers
US20130219038A1 (en) * 2012-02-17 2013-08-22 Electronics And Telecommunications Research Institute Router based on core score and method for setting core score and providing and searching content information therein
US9819643B2 (en) * 2014-10-13 2017-11-14 Telefonaktiebolaget L M Ericsson (Publ) CCN name patterns
US10069729B2 (en) * 2016-08-08 2018-09-04 Cisco Technology, Inc. System and method for throttling traffic based on a forwarding information base in a content centric network
US20180091405A1 (en) * 2016-09-29 2018-03-29 Alcatel-Lucent Canada, Inc. Software method to detect and prevent silent datapath ip route lookup failures


Also Published As

Publication number Publication date
CN110460529A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN110460529B (en) Data processing method and chip for forwarding information base storage structure of content router
JP3735471B2 (en) Packet relay device and LSI
CN103238145B (en) High-performance in network is equipped, the renewable and method and apparatus of Hash table that determines
Bando et al. Flashtrie: Hash-based prefix-compressed trie for IP route lookup beyond 100Gbps
Bando et al. FlashTrie: beyond 100-Gb/s IP route lookup using hash-based prefix-compressed trie
CA2842555A1 (en) Methods and systems for network address lookup engines
CN113220679A (en) Mixed FIB storage structure facing multi-mode network and data processing method thereof
JP3881663B2 (en) Packet classification apparatus and method using field level tree
CN113806403B (en) Method for reducing search matching logic resources in intelligent network card/DPU
US7551609B2 (en) Data structure for storing and accessing multiple independent sets of forwarding information
CN104780101A (en) FIB (Forward Information Base) table structure in named data networking forwarding plane and retrieval method thereof
Sun et al. An on-chip IP address lookup algorithm
JP2006246488A (en) Network router, address processing method, and computer program
Veeramani et al. Efficient IP lookup using hybrid trie-based partitioning of TCAM-based open flow switches
JP3970448B2 (en) Information relay method and apparatus
CN112667640B (en) Routing address storage method and device
Ghosh et al. A hash based architecture of longest prefix matching for fast IP processing
CN109194574B (en) IPv6 route searching method
Mahini et al. MLET: a power efficient approach for TCAM based, IP lookup engines in Internet routers
Vijay et al. Implementation of memory-efficient linear pipelined IPv6 lookup and its significance in smart cities
Smiljanić et al. A comparative review of scalable lookup algorithms for IPv6
CN102739551B (en) Multi-memory flow routing architecture
Vijay et al. A Memory-Efficient Adaptive Optimal Binary Search Tree Architecture for IPV6 Lookup Address
Pao et al. Bit-shuffled trie: IP lookup with multi-level index tables
CN110474844B (en) Training method and chip for learning type index data structure of high-performance intelligent router

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant