CN108833299A - Large-scale network data processing method based on a reconfigurable switch chip architecture - Google Patents
Large-scale network data processing method based on a reconfigurable switch chip architecture
- Publication number
- CN108833299A (application CN201711448872.4A)
- Authority
- CN
- China
- Prior art keywords
- packet
- header
- slice
- reconfigurable
- microengine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
- H04L45/745—Address table lookup; Address filtering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/215—Flow control; Congestion control using token-bucket
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/22—Traffic shaping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2441—Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3009—Header conversion, routing tables or routing tags
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/22—Parsing or analysis of headers
Abstract
The invention discloses a large-scale network data processing method based on a reconfigurable switch chip architecture: (1) receive multiple channels of packets from physical links and store them; (2) divide each packet into N packet slices; when the number of slices is greater than 1, execute steps (3)-(5) and step (6); otherwise, execute steps (4) and (6); (3) store the packet slices that contain payload, and add the corresponding storage address pointer information to the header slice; (4) assign a sequence number to the header slice, parse the header to obtain the packet type, and, according to the packet type, parse, classify, and forward the headers independently and in parallel, updating the header slice; (5) extract the payload and splice it with the corresponding header into a complete packet; (6) according to the sequence numbers carried in the headers, apply traffic shaping and queue management to the packets processed in parallel and forward them in order.
Description
Technical field
The present invention relates to a large-scale network data processing method based on a reconfigurable switch chip architecture, and belongs to the field of wired communication technology.
Background art
With the development of the world economy, science, and technology, the number of network users is rising rapidly, and demand for both network functionality and bandwidth keeps growing. This poses ever greater challenges to the development of network technology: while bandwidth continually increases, more and more network protocols and increasingly complex and changeable network structures are being proposed, so that the requirements for programmability and multi-functionality of network entities such as routers, switches, and gateways keep growing. The reconfigurable switch chip architecture emerged to meet these growing network demands. The powerful high-speed data processing capability of a reconfigurable switch chip is realized mainly through multiple microprocessors embedded in the core, each containing multiple hardware threads, together with hardware acceleration techniques; the use of dedicated co-processing units also gives the designer greater freedom. Developers can use a reconfigurable switch chip to program rapidly and flexibly provide the functions customers require, giving the network system both high performance and high flexibility.
A reconfigurable switch chip carries diverse packet processing tasks. How to effectively support reconfigurable implementation of services such as packet forwarding, routing table lookup, and traffic management, and how to improve the flexibility of the chip through optimization while guaranteeing packet processing performance so as to support large-scale network data processing, are the difficulties in realizing a reconfigurable switch chip.
Summary of the invention
The technical problem solved by the invention is as follows: in view of the requirements on a reconfigurable switch chip in terms of performance and flexibility, and on the basis of analyzing the packet forwarding characteristics of reconfigurable chips, a large-scale network data processing method based on a reconfigurable switch chip architecture is proposed, which achieves flexibility while guaranteeing packet processing performance.
The technical solution of the invention is as follows: a large-scale network data processing method based on a reconfigurable switch chip architecture, comprising the following steps:
(1) receive multiple channels of packets from physical links and store them;
(2) divide each packet stored in step (1) into N packet slices according to a preset slice size, N >= 1, where each slice is at least as large as the packet header; when the number of slices is greater than 1, execute steps (3)-(5) and step (6); otherwise, execute steps (4) and (6);
(3) store the packet slices that contain payload, and add the corresponding payload storage address pointer information to the slice that contains the packet header;
(4) assign a sequence number to the header slice, parse the header to obtain the packet type, and, according to the packet type, parse, classify, and forward the headers independently and in parallel, updating the header slice;
(5) according to the payload storage address information carried in the header, extract the payload from the cache and splice it with the corresponding header into a complete packet;
(6) according to the sequence numbers carried in the headers, apply traffic shaping and queue management to the packets processed in parallel, then split them into multiple channels and forward them in order.
Step (1) is implemented as follows:
(1.1) receive packets from the physical links through multiple ports;
(1.2) perform packet identification, checking, and filtering on the received packets, filter out invalid packets, and store the remaining valid packets in a receive buffer;
(1.3) aggregate the packets of each channel into one channel of data in order of arrival time;
(1.4) cache the packets obtained in step (1.3) in order.
Step (4) uses multiple parallel microengines to parse, classify, and forward the headers independently and in parallel, specifically:
(4.1) poll the working state of every thread of each microengine, and submit a received header to the microengine with the most idle threads;
(4.2) the microengine that receives the packet loads the corresponding microcode instructions and, according to them, schedules multiple threads to access the relevant entries in the respective storage units of the storage module in a round-robin, non-preemptive manner, completing header parsing, classification, and forwarding, thereby updating the header slice.
The threads inside each microengine work in a pipelined manner.
The specific method of accessing the relevant entries in the respective storage units of the storage module in a round-robin, non-preemptive manner in step (4.2) is:
(4.2.1) record the thread numbers of all microengine threads that are waiting to access a storage unit in the memory, together with the storage units they need to access;
(4.2.2) poll whether each storage unit is being accessed; when a thread completes its access to a storage unit, search the recorded thread numbers in order for one thread waiting to access that storage unit, and grant access to that thread.
The storage units include a DDR memory for storing tables such as the VLAN table and MPLS table. When a microengine accesses the DDR memory, it first calls a search engine and directs it to search the entries in the DDR using a hash algorithm or a binary-tree search algorithm, find the entry that matches the header being processed by the microengine, and feed the search result back to the microengine.
The microengines are integrated on a single chip.
The chip is equipped internally with a special instruction set dedicated to network packet processing, including multiply instructions, cyclic redundancy check instructions, content-addressing instructions, and FFS (find-first-set) instructions; according to the microcode, a microengine schedules its threads to execute these instructions and complete the corresponding packet processing.
Step (6) applies traffic shaping to the packets using a priority-based token bucket algorithm.
Step (6) performs queue management on the packets using priority queuing, flow-based weighted queuing, fair queuing, or PQ/CQ queuing methods.
Compared with the prior art, the invention has the following advantages:
(1) The invention inherits the core idea of reconfigurability by separating data forwarding from control. The data plane runs mainly on the microengine processing cores, realizing high-speed forwarding of data packets between input ports and output ports at line rate, and takes full advantage of the independence of data packets by processing them in parallel; the control plane runs on hardware co-processors, which handle higher-level functions such as routing table lookup, traffic management, and QoS control.
(2) Packet processing in the invention is programmable: the microcode runs on the microengines, and reloading the microcode makes system upgrades very convenient.
(3) In terms of protocol identification and classification, the invention can identify data packets according to the packet's protocol type, port number, destination address, and other protocol-specific information.
(4) In terms of packet splitting and reassembly, the invention can slice packets and guarantee the transmission order of packets during reassembly.
(5) In terms of header processing, the invention uses multiple parallel microengines to perform complete end-to-end processing of multiple headers simultaneously, and each microengine contains multiple threads, so line-rate processing at high bandwidth can be achieved.
(6) The invention can shape traffic at the output according to the requirements of particular protocols or applications so as to meet delay and delay-variation requirements; after traffic shaping, packets are sent to the corresponding queues for priority processing, thereby realizing QoS guarantees.
(7) The invention uses dedicated hardware acceleration units, such as the search engine (SE), order-preserving engine (OE), traffic shaper (TM), and queue manager (QM), to co-process specific tasks and improve processing speed.
Detailed description of the invention
Fig. 1 is a diagram of large-scale network data processing based on a reconfigurable switch chip architecture according to the present invention;
Fig. 2 is a flow chart of traffic management in an embodiment of the present invention;
Fig. 3 is a flow chart of queue management in an embodiment of the present invention;
Fig. 4 is a system block diagram of a large-scale network data processing system based on a reconfigurable switch chip architecture in an embodiment of the present invention.
Specific embodiment
The present invention is described further below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the present invention provides a large-scale network data processing method based on a reconfigurable switch chip architecture, with the following specific steps:
(1) receive multiple channels of packets from physical links and store them; specifically:
(1.1) receive packets from multiple ports;
(1.2) perform packet identification, checking, and filtering on the received packets, filter out invalid packets, and store the remaining valid packets in a receive buffer;
(1.3) aggregate the packets of each channel into one channel of data in order of arrival time;
(1.4) cache the packets obtained in step (1.3) in order.
(2) divide each packet stored in step (1) into N packet slices according to a preset slice size, N >= 1, where each slice is at least as large as the packet header; when the number of slices is greater than 1, execute steps (3)-(5) and step (6); otherwise, execute steps (4) and (6);
(3) store the packet slices that contain payload, and add the corresponding payload storage address pointer information to the packet slice that contains the header;
(4) assign a sequence number to the header slice, parse the header to obtain the packet type, and, according to the packet type, parse, classify, and forward the headers independently and in parallel, updating the header slice.
The specific method of using multiple parallel microengines to parse, classify, and forward headers independently and in parallel is:
(4.1) poll the working state of every thread of each microengine, and submit a received header to the microengine with the most idle threads;
(4.2) the microengine that receives the packet loads the corresponding microcode instructions and, according to them, schedules multiple threads to access the relevant entries in the respective storage units of the storage module in a round-robin, non-preemptive manner, completing header parsing, classification, and forwarding, thereby updating the header slice. The specific method is:
(4.2.1) record the thread numbers of all microengine threads that are waiting to access a storage unit in the memory, together with the storage units they need to access;
(4.2.2) poll whether each storage unit is being accessed; when a thread completes its access to a storage unit, search the recorded thread numbers in order for one thread waiting to access that storage unit, and grant access to that thread.
The storage units include a DDR memory for storing tables such as the VLAN table and MPLS table. When a microengine accesses the DDR memory, it first calls a search engine and directs it to search the entries in the DDR using a hash algorithm or a binary-tree search algorithm, find the entry that matches the header being processed by the microengine, and feed the search result back to the microengine.
The threads inside each microengine work in a pipelined manner. The microengines are integrated on a single chip. The chip is equipped internally with a special instruction set dedicated to network packet processing, including multiply instructions, cyclic redundancy check instructions, content-addressing instructions, and FFS (find-first-set) instructions; according to the microcode, a microengine schedules its threads to execute these instructions and complete the corresponding packet processing.
(5) according to the payload storage address information carried in the header, extract the payload from the cache and splice it with the corresponding header into a complete packet;
(6) according to the sequence numbers carried in the headers, apply traffic shaping and queue management to the packets processed in parallel, then split them into multiple channels and forward them in order.
As shown in Fig. 2, the specific method of applying traffic shaping to packets with a priority-based token bucket algorithm is as follows. First, packets are classified according to preset matching rules: packets that do not match the rules need not pass through the token bucket and are sent directly; packets that match the rules are processed by the token bucket. When there are enough tokens in the bucket, a packet can be sent, and the number of tokens in the bucket is reduced by the packet length; when the tokens in the bucket are insufficient, the packet cannot be sent until new tokens are generated in the bucket. This limits the packet flow to at most the rate at which tokens are generated, achieving the purpose of rate limiting. After traffic shaping, the packets are transmitted to the QM module.
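The token bucket behaviour described above can be sketched as follows; this is a minimal illustration of the general algorithm, with parameter values invented for the example rather than taken from the patent:

```python
class TokenBucket:
    """Token bucket for traffic shaping: tokens accrue at `rate` per
    second up to `capacity`; a packet may be sent only if the bucket
    holds at least as many tokens as the packet's length."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens generated per second
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity
        self.clock = 0.0

    def advance(self, now):
        # Accrue tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.clock) * self.rate)
        self.clock = now

    def try_send(self, length, now):
        self.advance(now)
        if self.tokens >= length:
            self.tokens -= length   # consume tokens equal to packet length
            return True
        return False                # insufficient tokens: packet must wait

tb = TokenBucket(rate=100, capacity=150)   # 100 tokens/s, burst of 150
assert tb.try_send(120, now=0.0) is True   # burst allowed immediately
assert tb.try_send(120, now=0.0) is False  # only 30 tokens remain
assert tb.try_send(120, now=1.0) is True   # 30 + 100 accrued >= 120
```

The bucket caps both the sustained rate (`rate`) and the burst size (`capacity`), which is what limits the flow to at most the token generation rate over any long interval.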
Step (6) performs queue management on the packets using priority queuing, flow-based weighted queuing, fair queuing, or PQ/CQ queuing methods.
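Of the queuing methods listed, strict priority queuing (PQ) is the simplest to illustrate: a lower-priority queue is served only when every higher-priority queue is empty. The sketch below uses invented names and is not the patent's implementation:

```python
from collections import deque

class PriorityQueueScheduler:
    """Strict priority (PQ) scheduling: queue 0 is highest priority;
    a queue is served only when all higher-priority queues are empty."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Scan queues from highest to lowest priority.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # all queues empty

sched = PriorityQueueScheduler(3)
sched.enqueue(2, "bulk")
sched.enqueue(0, "voice")
sched.enqueue(1, "video")
assert sched.dequeue() == "voice"   # highest priority served first
assert sched.dequeue() == "video"
assert sched.dequeue() == "bulk"
assert sched.dequeue() is None
```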
Based on the above large-scale network data processing method based on a reconfigurable switch chip architecture, the present invention provides a large-scale network data processing system based on a reconfigurable switch chip architecture, whose structure is shown in Fig. 4. The system includes ports XGE1~XGEn, a MAC module (Medium Access Control), a convergence module RMUX (Roll Multiplexer), an input buffer module IBM (Ingress Buffer Management), a packet parsing module PA (Packet Analysis), a polling dispatch module PBA (Packet Bus Allocation), an order-preserving engine module OE (Order-preserving Engine), a microengine cluster module NPE (Network Processing Engine), a packet editing module PE (Packet Editing), a traffic shaping module TM (Traffic Management), a queue management module QM (Queue Management), and an output buffer module EBM (Egress Buffer Management).
S1: the ports XGE1~XGEn receive packets and send them to the MAC module, where XGE stands for Ten-Gigabit Ethernet;
S2: the MAC (Medium Access Control) module performs packet identification, checking, and filtering on the received packets, filters out invalid packets, and stores the remaining valid packets in the receive buffer. The MAC module consists of three parts, a control module, a sending module, and a receiving module, and supports full-duplex communication.
The control module includes a general-processor interface, registers, and so on, allowing a general processor to control MAC processing; it also provides statistics on the packets sent and received on the interface, including counts of unicast, multicast, broadcast, short packets, long packets, and CRC correct/error packets.
The sending module mainly completes the transmission of data frames: it reads data byte by byte from the send buffer, fills in the Ethernet frame CRC and preamble, and converts them for transmission by the physical-layer XGE; an inter-frame gap counter guarantees the minimum interval between two Ethernet frames during transmission.
The receiving module mainly completes the reception of data frames: it takes data from the physical-layer XGE interface, performs packet identification, checking, and filtering, and stores the packets in the receive buffer.
S3: the convergence module RMUX (Roll Multiplexer) aggregates the packets of each channel into one channel of data in order of arrival time and then sends them to the IBM module;
S4: the input buffer module IBM (Ingress Buffer Management) caches the incoming packets in order and divides each packet into N packet slices according to a preset slice size, N >= 1, where each slice is at least as large as the packet header; a typical slice size is 80 bytes. After slicing, the slices are sent to the packet parsing module PA;
S5: when the number of packet slices is greater than 1, the packet parsing module PA (Packet Analysis) stores the slices containing payload into the RB (Resource Buffer) module and adds the corresponding payload storage address pointer information to the packet slice containing the header. Parsing yields the packet type, which includes ARP (Address Resolution Protocol), IPv4 (Internet Protocol Version 4), and IPv6 (Internet Protocol Version 6); after the packet type is parsed, the header is forwarded to the polling dispatch module PBA.
Further, if PA parsing finds that a packet requires processing at layer 4 or above, the packet is sent, after processing by the NPE module, to a general processor for higher-level protocol processing.
S6: the polling dispatch module PBA (Packet Bus Allocation) polls the working state of every thread of each microengine inside the network packet header processor, attaches to the received header the sequence number issued by the order-preserving engine module (OE), and submits it to the microengine with the most idle threads;
S7: the order-preserving engine module OE (Order-preserving Engine): to prevent packets from getting out of order after processing by the microengines, a sequence number is assigned to each header before the packet enters a microengine and sent to the polling dispatch module PBA.
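The order-preserving behaviour can be sketched as follows: sequence numbers are assigned on ingress, and a packet that finishes early is held back until all packets with smaller sequence numbers have been released. The class and method names are illustrative, not from the patent:

```python
import heapq

class OrderPreservingEngine:
    """Tag packets with sequence numbers on ingress and release them
    in sequence on egress, even if parallel microengines finish
    processing out of order."""

    def __init__(self):
        self.next_tag = 0      # next sequence number to assign
        self.next_out = 0      # next sequence number allowed to leave
        self.pending = []      # min-heap of (seq, packet) finished early

    def tag(self, packet):
        seq = self.next_tag
        self.next_tag += 1
        return seq, packet

    def release(self, seq, packet):
        """Called when a microengine finishes; returns packets now in order."""
        heapq.heappush(self.pending, (seq, packet))
        out = []
        while self.pending and self.pending[0][0] == self.next_out:
            out.append(heapq.heappop(self.pending)[1])
            self.next_out += 1
        return out

oe = OrderPreservingEngine()
assert [oe.tag(p)[0] for p in ("A", "B", "C")] == [0, 1, 2]
# Microengines finish out of order: C, then A, then B.
assert oe.release(2, "C") == []            # must wait for A and B
assert oe.release(0, "A") == ["A"]
assert oe.release(1, "B") == ["B", "C"]    # B also unblocks the held C
```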
S8: the network packet header processor parses, classifies, and forwards headers independently and in parallel, thereby updating the header slices, and sends the processed header slices to the packet editing module PE.
The network packet header processor includes the microengine cluster module NPE, a task scheduling module RBA, and storage units. Specifically:
The microengine cluster module NPE (Network Processing Engine) consists of multiple parallel microengines; each microengine completes the full processing of one packet and contains multiple threads, which work in a pipelined manner. The microengine that receives a packet loads the corresponding microcode instructions from the instruction memory IMEM and, according to them, schedules multiple threads to access, through the task scheduling module RBA, the relevant entries in the respective storage units of the storage module in a round-robin, non-preemptive manner, completing header parsing, classification, and forwarding and thereby updating the header slice. The processed header is then sent to the PE module.
The specific method by which the task scheduling module RBA accesses the relevant entries in the respective storage units of the storage module in a round-robin, non-preemptive manner is: the RBA records the thread numbers of all microengine threads waiting to access a storage unit in the memory, together with the storage units they need to access; it polls whether each storage unit is being accessed, and when a thread completes its access to a storage unit, it searches the recorded thread numbers in order for one thread waiting to access that storage unit and grants access to that thread.
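The arbitration above, for a single storage unit, can be sketched as follows. Non-preemptive means the current holder is never interrupted; round-robin means waiting threads are granted in the order they were recorded. Names are illustrative:

```python
from collections import deque

class MemoryArbiter:
    """Round-robin, non-preemptive arbitration of one storage unit
    among microengine threads (a sketch of the RBA behaviour)."""

    def __init__(self):
        self.busy = None        # thread currently holding access
        self.waiters = deque()  # waiting threads, in recorded order

    def request(self, thread_id):
        if self.busy is None:
            self.busy = thread_id       # unit idle: grant immediately
            return True
        self.waiters.append(thread_id)  # recorded; must wait its turn
        return False

    def complete(self):
        """Holder finishes; hand access to the next recorded waiter."""
        self.busy = self.waiters.popleft() if self.waiters else None
        return self.busy

arb = MemoryArbiter()
assert arb.request("t0") is True     # unit idle -> immediate grant
assert arb.request("t1") is False    # queued
assert arb.request("t2") is False    # queued behind t1
assert arb.complete() == "t1"        # round-robin: first waiter next
assert arb.complete() == "t2"
assert arb.complete() is None        # queue drained
```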
The storage units include a DDR memory, a TCAM, and an on-chip memory LMEM. Specifically:
The DDR (Double Data Rate) memory stores service-related tables such as the VLAN table and MPLS table, which have relatively low processing-speed requirements. A microengine calls the search engine through the task scheduler and directs it to search the entries in the DDR using an appropriate search algorithm, find the entry that matches the header being processed by the microengine, and feed the search result back to the microengine.
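As an illustration of the hash-based lookup the search engine can perform over a DDR-resident table, the following sketch buckets entries by hash and returns the matching entry (or a miss) to the caller. The table contents and bucket count are invented for the example:

```python
class SearchEngine:
    """Hash-based exact-match lookup over a table of (key, value)
    entries, standing in for the patent's search engine (SE)."""

    NUM_BUCKETS = 64  # illustrative bucket count

    def __init__(self, entries):
        self.buckets = {}
        for key, value in entries:
            b = hash(key) % self.NUM_BUCKETS
            self.buckets.setdefault(b, []).append((key, value))

    def lookup(self, key):
        # Probe only the bucket the key hashes to.
        for k, v in self.buckets.get(hash(key) % self.NUM_BUCKETS, []):
            if k == key:
                return v        # matching entry fed back to the microengine
        return None             # miss

# Hypothetical VLAN table entries, purely for illustration.
vlan_table = [(("vlan", 10), "port-group-A"), (("vlan", 20), "port-group-B")]
se = SearchEngine(vlan_table)
assert se.lookup(("vlan", 10)) == "port-group-A"
assert se.lookup(("vlan", 99)) is None
```

A binary-tree search, the other algorithm the patent mentions, would instead keep the entries sorted by key and bisect; the hash variant trades ordering for constant expected probe cost.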
The TCAM (Ternary Content Addressable Memory) stores entries with higher processing-speed requirements, such as the MAC address table and routing table. These tables are stored in TCAM form; during a lookup, the task scheduling module converts the information in the header into TCAM storage form, matches it against the MAC address table and routing table, finds the required matching entry, and feeds it back to the microengine.
The on-chip memory LMEM (Local Memory) stores the flow tables and is accessed by microengine threads directly through the task scheduler.
S9: the packet editing module PE (Packet Editing) modifies the data content of the header, extracts the payload from the cache according to the payload storage address information carried in the header, splices it with the corresponding header into a complete packet, and sends it to the traffic shaping module TM;
S10: the traffic shaping module TM (Traffic Management) applies traffic shaping to the packets and sends the shaped packets to the queue management module QM.
Specifically, as shown in Fig. 2, the priority-based token bucket algorithm guarantees network QoS as follows. First, packets are classified according to preset matching rules: packets that do not match the rules need not pass through the token bucket and are sent directly; packets that match the rules are processed by the token bucket. When there are enough tokens in the bucket, a packet can be sent, and the number of tokens in the bucket is reduced by the packet length; when the tokens in the bucket are insufficient, the packet cannot be sent until new tokens are generated in the bucket. This limits the packet flow to at most the rate at which tokens are generated, achieving the purpose of rate limiting.
S11: the queue management module QM (Queue Management) performs queue management on the packets and sends the managed packets to the EBM (Egress Buffer Management) module.
Specifically, as shown in Fig. 3, queues are first created according to an index. When there is no congestion on the interface, packets are transmitted immediately upon arrival; in case of congestion, packets are classified and sent to different queues, and the queue scheduling mechanism handles packets of different priorities separately, with higher-priority queues served first. When the length of a queue reaches a certain maximum, a RED or WRED strategy can be used to drop packets and avoid network overload. After queue management, the packets are transmitted to the EBM module.
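The RED/WRED drop decision mentioned above can be sketched as follows: below a minimum threshold nothing is dropped, above a maximum threshold everything is dropped, and in between the drop probability rises linearly. The thresholds and probability here are invented for illustration:

```python
import random

def wred_drop(queue_len, min_th, max_th, max_p, rng=random.random):
    """RED-style drop decision: never drop below min_th, always drop
    at or above max_th, otherwise drop with probability rising
    linearly from 0 to max_p."""
    if queue_len < min_th:
        return False
    if queue_len >= max_th:
        return True
    p = max_p * (queue_len - min_th) / (max_th - min_th)
    return rng() < p

assert wred_drop(5, min_th=20, max_th=40, max_p=0.1) is False   # light load
assert wred_drop(40, min_th=20, max_th=40, max_p=0.1) is True   # overload
# Mid-range: drop probability is 0.1 * (30 - 20) / (40 - 20) = 0.05.
assert wred_drop(30, 20, 40, 0.1, rng=lambda: 0.04) is True
assert wred_drop(30, 20, 40, 0.1, rng=lambda: 0.06) is False
```

In WRED proper, each priority class would get its own `(min_th, max_th, max_p)` tuple, so higher-priority traffic starts being dropped later and less aggressively.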
S12: the output buffer module EBM caches the outgoing packets and sends them to the MAC module.
S13: the MAC module receives the packets, stores them in the send buffer, reads data from the send buffer, fills in the Ethernet frame CRC (Cyclic Redundancy Check) and preamble, and converts them for transmission by the physical-layer XGE.
Further, in the large-scale network data processing system based on a reconfigurable switch chip architecture shown in Fig. 4, the units inside the box are on-chip processing units, and those outside the box are off-chip processing units.
Further, in the system shown in Fig. 4, OE, SE, TM, and QM are hardware co-processors.
Further, in the system shown in Fig. 4, TM and QM can be implemented either on-chip or off-chip; if implemented off-chip, the switch chip's processing speed is higher and its power consumption is lower.
By using an optimized system architecture, a special instruction set, and hardware units for packet data processing, the present invention can meet the requirement of line-rate processing of high-speed data packets. It adopts an editable packet processing method, including editable packet parsing, packet lookup and forwarding, and packet editing and forwarding, making packet processing more flexible and faster and better suited to large-scale network data processing. High-speed, high-capacity intelligent packet processing functions, including packet parsing, classification, and forwarding, are completed by the microengines; complex and frequent operations such as routing table lookup, packet order preservation, traffic management, and queue management are handled by hardware co-processors, further improving performance and thereby combining service flexibility with high performance.
Parts of the present invention not described in detail belong to common knowledge of those skilled in the art.
Claims (10)
1. A large-scale network data processing method based on a reconfigurable switching chip architecture, characterized by comprising the following steps:
(1) receiving multiple channels of messages from physical links and storing them;
(2) dividing each message stored in step (1) into N message slices according to a preset slice size, where N ≥ 1 and each slice is no smaller than the message header; when the number of slices is greater than 1, executing steps (3)-(5) and step (6); otherwise, executing steps (4) and (6);
(3) storing the message slices that contain the message data payload, and adding the corresponding payload memory address pointer information to the header slice that contains the message header;
(4) assigning a sequence number to the header slice containing the header information, parsing the header to obtain the message type, and, according to the message type, parsing, classifying, and forwarding the headers independently and in parallel, so as to update the header slices;
(5) extracting the message data payload from the cache according to the payload storage address information carried in the header, and splicing it with the corresponding header into a complete message;
(6) reordering the messages processed in parallel according to the sequence numbers carried in their headers, performing traffic shaping and queue management, and then splitting the traffic into multiple channels for forwarding.
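The slice-and-reassemble scheme of claim 1 (steps 2, 3, and 5) can be illustrated with a minimal sketch. This is an illustration only, not the patented implementation: the record layout, `HEADER_LEN`, and the integer address pointers are hypothetical stand-ins for the claim's "payload memory address pointer information".

```python
HEADER_LEN = 14  # assumed minimum header size (e.g. an Ethernet header); hypothetical

def slice_packet(packet: bytes, slice_size: int, store: dict, next_addr: int):
    """Cut a packet into slices of `slice_size` bytes (step 2).

    Payload slices are written into `store` (step 3) and their addresses
    are attached to the header-slice record, mimicking the claim's
    payload memory address pointers. Returns (header_record, next_free_addr).
    """
    assert slice_size >= HEADER_LEN  # each slice must hold at least a header
    slices = [packet[i:i + slice_size] for i in range(0, len(packet), slice_size)]
    header_slice, payload_slices = slices[0], slices[1:]
    addrs = []
    for s in payload_slices:
        store[next_addr] = s        # store payload slice in the cache
        addrs.append(next_addr)     # remember its address pointer
        next_addr += 1
    return {"header": header_slice, "payload_addrs": addrs}, next_addr

def reassemble(record: dict, store: dict) -> bytes:
    """Splice header and payload slices back into a complete packet (step 5)."""
    return record["header"] + b"".join(store[a] for a in record["payload_addrs"])
```

A packet no larger than one slice yields only a header slice, matching the claim's "otherwise" branch in which steps (3) and (5) are skipped.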
2. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 1, characterized in that step (1) is implemented as:
(1.1) receiving messages from physical links through multiple ports;
(1.2) performing message identification, checksum verification, and filtering on the received messages, discarding invalid messages, and storing the remaining valid messages in a receive buffer;
(1.3) aggregating the messages of each channel into one data stream in order of arrival time;
(1.4) buffering the messages obtained in step (1.3) in order.
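The receive path of claim 2 amounts to an identify-verify-filter-buffer pipeline. A minimal sketch follows; the length test and the additive checksum are hypothetical stand-ins, since the claim does not specify the identification or verification method:

```python
def simple_checksum(data: bytes) -> int:
    """Toy 8-bit additive checksum (stand-in for the claim's verification step)."""
    return sum(data) % 256

def receive(messages, min_len=14):
    """Steps (1.1)-(1.4): identify, verify, filter, and buffer in arrival order.

    `messages` is an iterable of (payload, checksum) pairs in arrival order.
    """
    buffer = []
    for payload, chk in messages:
        if len(payload) < min_len:            # identification: runt frame -> invalid
            continue
        if simple_checksum(payload) != chk:   # verification failed -> filter out
            continue
        buffer.append(payload)                # valid message enters the receive buffer
    return buffer
```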
3. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 1, characterized in that step (4) uses multiple parallel micro engines to parse, classify, and forward the headers independently and in parallel, specifically:
(4.1) polling the working state of every thread of each micro engine, and submitting a received header to the micro engine with the most idle threads;
(4.2) the micro engine that receives the message loads the corresponding microcode instructions and, according to the microcode instructions, schedules multiple threads to access the relevant entries in the storage units of the storage module in a round-robin, non-preemptive manner, completing header parsing, classification, and forwarding, so as to update the header slice.
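The least-loaded dispatch of step (4.1) can be sketched as follows. The `MicroEngine` class and its fields are illustrative, not taken from the patent; the point is only that a header goes to whichever engine currently reports the most idle threads:

```python
class MicroEngine:
    """Illustrative model of a micro engine with a pool of worker threads."""
    def __init__(self, name: str, idle_threads: int):
        self.name = name
        self.idle = idle_threads   # number of threads currently idle
        self.headers = []          # headers dispatched to this engine

def dispatch(header, engines):
    """Step (4.1): poll thread states and pick the engine with most idle threads."""
    target = max(engines, key=lambda e: e.idle)
    target.headers.append(header)
    target.idle -= 1               # one thread becomes busy handling the header
    return target.name
```

With engines reporting 1 and 3 idle threads, the first two headers go to the second engine; once the idle counts tie, `max` falls back to list order.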
4. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 3, characterized in that the threads inside each micro engine work in a pipelined manner.
5. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 3, characterized in that the specific method of accessing the relevant entries in the storage units of the storage module in a round-robin, non-preemptive manner in step (4.2) is:
(4.2.1) recording the states of the storage units, the numbers of all micro engine threads that are ready to access the memory, and the storage units they need to access;
(4.2.2) polling whether a storage unit is being accessed; when a thread completes its access to the storage unit, sequentially searching the recorded thread numbers for a thread that is ready to access that storage unit, and granting the access right to that thread.
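The arbitration of steps (4.2.1)-(4.2.2) can be sketched as a small software arbiter (class and method names are illustrative, not from the patent): ready thread numbers are recorded per storage unit, and when an access completes the next recorded waiter is granted the unit in order, so access is never preempted.

```python
from collections import deque

class MemoryArbiter:
    """Non-preemptive, in-order access to storage units (sketch of claim 5)."""
    def __init__(self):
        self.owner = {}    # storage unit -> thread currently accessing it
        self.waiting = {}  # storage unit -> FIFO of ready thread numbers (4.2.1)

    def request(self, thread: int, unit: str) -> bool:
        """A thread asks for a unit; it gets it immediately or is queued in order."""
        if unit not in self.owner:
            self.owner[unit] = thread
            return True
        self.waiting.setdefault(unit, deque()).append(thread)
        return False

    def release(self, unit: str):
        """(4.2.2) On completion, grant the unit to the next recorded waiter."""
        q = self.waiting.get(unit)
        if q:
            self.owner[unit] = q.popleft()  # sequential search of the record
        else:
            del self.owner[unit]            # nobody waiting: unit becomes free
        return self.owner.get(unit)
```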
6. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 5, characterized in that the storage units include a DDR memory storing a VLAN table and an MPLS table; when a micro engine accesses the DDR memory, it first calls a search engine and specifies that the search engine search the entries in the DDR using a hash algorithm or a binary-tree search algorithm, find the entry that matches the header being processed by that micro engine, and feed the search result back to the micro engine.
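Claim 6's search engine chooses between a hash lookup and a binary search over sorted entries. A compact sketch, with hypothetical table contents (the real VLAN/MPLS table formats are not specified here):

```python
import bisect

def hash_lookup(table: dict, key):
    """Hash-algorithm path: direct hashed probe of e.g. a VLAN table."""
    return table.get(key)

def binary_search_lookup(sorted_entries, key):
    """Binary-search path over (key, value) entries sorted by key."""
    keys = [k for k, _ in sorted_entries]
    i = bisect.bisect_left(keys, key)
    if i < len(sorted_entries) and sorted_entries[i][0] == key:
        return sorted_entries[i][1]
    return None  # no matching entry

def search_engine(method: str, table, key):
    """The micro engine specifies the algorithm; the result is fed back to it."""
    if method == "hash":
        return hash_lookup(table, key)
    return binary_search_lookup(table, key)
```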
7. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 3, characterized in that the micro engines are integrated on a single chip.
8. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 7, characterized in that the chip contains a special instruction set dedicated to network packet processing, the special instruction set including multiply instructions, cyclic redundancy check instructions, content addressing instructions, and find-first-set (FFS) instructions; the micro engine schedules threads according to the microcode instructions to execute these instructions and complete the corresponding message processing.
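Two of claim 8's special instructions are easy to model in software. FFS (find first set) returns the index of the lowest set bit, useful for scanning ready bitmaps; the CRC instruction computes a cyclic redundancy check over message bytes. These are semantic sketches only, with an assumed 8-bit CRC polynomial, not the chip's actual instruction behavior:

```python
def ffs(word: int) -> int:
    """Find-first-set: index (from 0) of the least-significant 1 bit, -1 if none."""
    if word == 0:
        return -1
    return (word & -word).bit_length() - 1  # isolate lowest bit, take its position

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Toy bitwise CRC-8 model of a cyclic-redundancy-check instruction."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc
```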
9. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 1, characterized in that step (6) performs traffic shaping on the messages using a priority-based token bucket algorithm.
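The priority-based token bucket of claim 9 can be sketched as follows: each priority class owns a bucket that refills at its configured rate, and a message is forwarded only if its class's bucket holds enough tokens. The rates, bucket depths, and per-class mapping below are hypothetical; the patent fixes only the algorithm family.

```python
class TokenBucket:
    """One bucket: refills at `rate` tokens/second up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start full
        self.last = 0.0          # time of the previous refill

    def allow(self, size: float, now: float) -> bool:
        """Refill by elapsed time, then spend `size` tokens if available."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False             # message must wait or be dropped

def shape(message_size: float, priority: int, buckets: dict, now: float) -> bool:
    """Claim 9 sketch: each priority class is shaped by its own bucket."""
    return buckets[priority].allow(message_size, now)
```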
10. The large-scale network data processing method based on a reconfigurable switching chip architecture according to claim 1, characterized in that step (6) performs queue management on the messages using priority queuing, flow-based weighted queuing, fair queuing, or PQ/CQ queuing methods.
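Of the queue-management options listed in claim 10, strict priority queuing (PQ) is the simplest: dequeue always drains the highest-priority non-empty queue. A minimal sketch (the number of levels and the lower-index-is-higher-priority convention are assumptions):

```python
from collections import deque

class PriorityQueues:
    """Strict PQ: lower index = higher priority (claim 10's PQ option)."""
    def __init__(self, levels: int = 4):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, message, priority: int):
        self.queues[priority].append(message)

    def dequeue(self):
        for q in self.queues:        # scan from highest priority down
            if q:
                return q.popleft()
        return None                  # all queues empty
```

The other listed methods differ only in the dequeue policy: a weighted or fair queue would interleave service among the queues instead of always draining the highest one first.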
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711448872.4A CN108833299B (en) | 2017-12-27 | 2017-12-27 | Large-scale network data processing method based on reconfigurable switching chip architecture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108833299A true CN108833299A (en) | 2018-11-16 |
CN108833299B CN108833299B (en) | 2021-12-28 |
Family
ID=64153941
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711448872.4A Active CN108833299B (en) | 2017-12-27 | 2017-12-27 | Large-scale network data processing method based on reconfigurable switching chip architecture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108833299B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109684269A (en) * | 2018-12-26 | 2019-04-26 | 成都九芯微科技有限公司 | A kind of PCIE exchange chip kernel and working method |
CN110177046A (en) * | 2019-04-18 | 2019-08-27 | 中国人民解放军战略支援部队信息工程大学 | Secure exchange chip, implementation method and the network switching equipment based on mimicry thought |
CN110716797A (en) * | 2019-09-10 | 2020-01-21 | 无锡江南计算技术研究所 | DDR4 performance balance scheduling structure and method for multiple request sources |
CN111031044A (en) * | 2019-12-13 | 2020-04-17 | 浪潮(北京)电子信息产业有限公司 | Message analysis hardware device and message analysis method |
CN112995067A (en) * | 2021-05-18 | 2021-06-18 | 中国人民解放军海军工程大学 | Coarse-grained reconfigurable data processing architecture and data processing method thereof |
CN113037635A (en) * | 2019-12-09 | 2021-06-25 | 中国科学院声学研究所 | Multi-source assembling method and device for data block in ICN router |
CN113098798A (en) * | 2021-04-01 | 2021-07-09 | 烽火通信科技股份有限公司 | Method for configuring shared table resource pool, packet switching method, chip and circuit |
CN113691469A (en) * | 2021-07-27 | 2021-11-23 | 新华三技术有限公司合肥分公司 | Message out-of-order rearrangement method and single board |
CN113949669A (en) * | 2021-10-15 | 2022-01-18 | 湖南八零二三科技有限公司 | Vehicle-mounted network switching device and system capable of automatically configuring and analyzing according to flow |
WO2022174408A1 (en) * | 2021-02-20 | 2022-08-25 | 华为技术有限公司 | Switching system |
CN117319332A (en) * | 2023-11-30 | 2023-12-29 | 成都北中网芯科技有限公司 | Programmable hardware acceleration method for network message slicing and network processing chip |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040093602A1 (en) * | 2002-11-12 | 2004-05-13 | Huston Larry B. | Method and apparatus for serialized mutual exclusion |
CN1499792A (en) * | 2002-11-11 | 2004-05-26 | 华为技术有限公司 | Method for raising retransmission capability of network processor for servicing multiple data parts |
CN1558626A (en) * | 2004-02-10 | 2004-12-29 | 中兴通讯股份有限公司 | Method for realizing group control function by means of network processor |
CN1677952A (en) * | 2004-03-30 | 2005-10-05 | 武汉烽火网络有限责任公司 | Method and apparatus for wire speed parallel forwarding of packets |
CN101276294A (en) * | 2008-05-16 | 2008-10-01 | 杭州华三通信技术有限公司 | Method and apparatus for parallel processing heteromorphism data |
CN101442486A (en) * | 2008-12-24 | 2009-05-27 | 华为技术有限公司 | Method and apparatus for distributing micro-engine |
CN101616097A (en) * | 2009-07-31 | 2009-12-30 | 中兴通讯股份有限公司 | A kind of management method of output port queue of network processor and system |
EP2372962A1 (en) * | 2010-03-31 | 2011-10-05 | Alcatel Lucent | Method for reducing energy consumption in packet processing linecards |
CN105511954A (en) * | 2014-09-23 | 2016-04-20 | 华为技术有限公司 | Method and device for message processing |
CN106612236A (en) * | 2015-10-21 | 2017-05-03 | 深圳市中兴微电子技术有限公司 | Many-core network processor and micro engine message scheduling method and micro engine message scheduling system thereof |
Also Published As
Publication number | Publication date |
---|---|
CN108833299B (en) | 2021-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108833299A (en) | A kind of large scale network data processing method based on restructural exchange chip framework | |
CN108809854B (en) | Reconfigurable chip architecture for large-flow network processing | |
CN1875585B (en) | Dynamic unknown L2 flooding control with MAC limits | |
US9471388B2 (en) | Mapping network applications to a hybrid programmable many-core device | |
US7647472B2 (en) | High speed and high throughput digital communications processor with efficient cooperation between programmable processing components | |
US10057387B2 (en) | Communication traffic processing architectures and methods | |
CN103004158B (en) | There is the network equipment of programmable core | |
CN107689931A (en) | It is a kind of that Ethernet exchanging function system and method are realized based on domestic FPGA | |
US6604147B1 (en) | Scalable IP edge router | |
CN108462646B (en) | Message processing method and device | |
CN101136854B (en) | Method and apparatus for implementing data packet linear speed processing | |
CN108475244A (en) | Accelerate network packet processing | |
US20140181319A1 (en) | Communication traffic processing architectures and methods | |
CN108353029A (en) | For managing the method and system for calculating the data service in network | |
US20070153796A1 (en) | Packet processing utilizing cached metadata to support forwarding and non-forwarding operations on parallel paths | |
CN107181663A (en) | A kind of message processing method, relevant device and computer-readable recording medium | |
JP2004015561A (en) | Packet processing device | |
CN101242362B (en) | Find key value generation device and method | |
JP2014524688A (en) | Lookup front-end packet output processor | |
CN109768939A (en) | A kind of labeling network stack method and system for supporting priority | |
CN102970150A (en) | Extensible multicast forwarding method and device for data center (DC) | |
CN105991438B (en) | Treating method and apparatus based on data packet in virtual double layer network | |
CN106713144A (en) | Read-write method of message exit information and forwarding engine | |
CN109905321A (en) | A kind of route control system interacted for customized high-speed interface with Ethernet | |
WO1999059078A9 (en) | Digital communications processor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||