CN111478858A - Flow processing method, flow Hash method and device - Google Patents


Info

Publication number
CN111478858A
CN111478858A
Authority
CN
China
Legal status
Granted
Application number
CN202010132940.1A
Other languages
Chinese (zh)
Other versions
CN111478858B (en)
Inventor
唐顺平
崔广美
Current Assignee
New H3C Security Technologies Co Ltd
Original Assignee
New H3C Security Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Security Technologies Co Ltd filed Critical New H3C Security Technologies Co Ltd
Priority to CN202010132940.1A priority Critical patent/CN111478858B/en
Publication of CN111478858A publication Critical patent/CN111478858A/en
Application granted granted Critical
Publication of CN111478858B publication Critical patent/CN111478858B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H04L47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L47/25: Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions


Abstract

The present disclosure provides a traffic processing method, a traffic Hash method, and a corresponding device. The method is applied to a transmission network that comprises a network board and two or more stacking boards associated with the network board. The number of links between the network board and each stacking board and the total stacking port rate of each stacking board are obtained; a rate weight ratio is determined from the ratio of the total stacking port rates of the stacking boards; a stack member list is determined from the link counts and the rate weight ratio; and the traffic to be sent is transmitted according to the stack member list. In this way, traffic can be load-shared reasonably in proportion to the rates of the stacking ports on each stacking board.

Description

Flow processing method, flow Hash method and device
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a traffic processing method, a traffic Hash method, and an apparatus.
Background
As shown in fig. 1, in a distributed IRF (Intelligent Resilient Framework) system, each member unit includes a service board, a network board, and a plurality of stacking boards, where each network board has one or more links to each stacking board.
When the stacking ports are distributed across multiple stacking boards, one prior-art scheme takes the stacking board as the granularity and load-shares traffic according to the number of stacking boards; another prior-art scheme bundles all the stacking ports distributed on different stacking boards into one stacking port group and load-shares according to that group. Neither scheme takes the rate differences of the stacking ports into account, so the load-sharing scheme cannot be adjusted according to the stacking port rates.
Disclosure of Invention
The embodiments of the present disclosure provide a traffic processing method applied to a transmission network, where the transmission network comprises a network board and two or more stacking boards associated with the network board, and the method comprises the following steps:
acquiring traffic to be sent;
acquiring the number of links between the network board and each stacking board and the total stacking port rate of each stacking board;
determining a rate weight ratio according to the ratio of the total stacking port rates of the stacking boards;
determining a stack member list according to the number of links between the network board and each stacking board and the rate weight ratio;
sending the traffic according to the stack member list;
wherein the total stacking port rate of each stacking board is the sum of the rates of all stacking ports on that board.
As can be seen from the foregoing, the stack member list is determined from the number of links between the network board and each stacking board together with the rate weight ratio, and the rate weight ratio is in turn determined by the ratio of the total stacking port rates of the stacking boards. Traffic sent according to the stack member list is therefore load-shared at a rate reasonable for the stacking ports on each stacking board.
In another embodiment, determining the rate weight ratio according to the ratio of the total stacking port rates of the stacking boards specifically includes:
comparing the total stacking port rate values of the stacking boards, rounding the resulting ratio, and taking the rounded ratio as the rate weight ratio.
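As an illustration only (the patent describes this procedure in prose, not code), the rounding and reduction of the rate weight ratio can be sketched in Python; the function name and the Gbit/s units are assumptions made for the example:

```python
from math import gcd
from functools import reduce

def rate_weight_ratio(total_rates):
    """Reduce the per-board total stacking-port rates (in Gbit/s) to the
    smallest integer ratio, rounding non-integer rates first."""
    rounded = [max(1, round(r)) for r in total_rates]  # round, avoid zero terms
    g = reduce(gcd, rounded)
    return [r // g for r in rounded]

# Rates from the fig. 3 example: board 1 totals 30G, board 2 totals 100G.
print(rate_weight_ratio([30, 100]))  # -> [3, 10]
```

Reducing by the greatest common divisor keeps the member counts derived from this ratio later as small as possible.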
In another embodiment, determining the stack member list according to the number of links between the network board and each stacking board and the rate weight ratio includes:
determining the stack member list according to whether the rate weight ratios are the same, together with the number of links between the network board and each stacking board.
In another embodiment, when the rate weight ratios are the same, determining the stack member list according to the number of links between the network board and each stacking board and the rate weight ratio specifically includes:
the stack member list comprises the actual interface members between the network board and each stacking board and virtual members between the network board and each stacking board. Taking the maximum number of links between the network board and any stacking board as the reference number, virtual interface members are virtualized for each stacking board whose link count is less than the reference number, so that every stacking board has the same number of members in the stack member list.
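The equal-ratio case can be sketched as follows; this is a hypothetical illustration, with made-up board names and a tuple encoding of members rather than the patent's actual data structures:

```python
def pad_to_max_links(links_per_board):
    """Equal rate weight ratios: take the maximum link count as the
    reference number and pad boards with fewer links with virtual
    members, so every board contributes the same number of entries."""
    ref = max(links_per_board.values())
    member_list = []
    for board, n_links in sorted(links_per_board.items()):
        member_list += [(board, i, 'actual') for i in range(n_links)]
        member_list += [(board, i, 'virtual') for i in range(n_links, ref)]
    return member_list

# Example: 3 links to board1, 4 links to board2; board1 gains 1 virtual member.
members = pad_to_max_links({'board1': 3, 'board2': 4})
```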
In another embodiment, when the rate weight ratios are different, determining the stack member list according to the number of links between the network board and each stacking board and the rate weight ratio specifically includes:
the stack member list comprises the actual interface members between the network board and each stacking board and virtual members between the network board and each stacking board, where the virtual members are virtualized for each stacking board by taking the minimum number of links between the network board and any stacking board as the reference number.
Virtualizing, for each stacking board, the virtual interface members corresponding to the rate weight ratio specifically includes:
multiplying the total stacking port rate of each stacking board by the reference number, comparing the products to obtain a ratio result, and virtualizing the virtual interface members according to that ratio result.
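A minimal sketch of that computation, under the assumption that "comparing" means reducing the scaled rates by their greatest common divisor (the patent only says "common divisor"); names are illustrative:

```python
from math import gcd
from functools import reduce

def member_targets(total_rates, links_per_board):
    """Unequal rate weight ratios: take the minimum link count as the
    reference number K, scale each board's total stacking-port rate by K,
    and reduce by the common divisor to get each board's member count."""
    k = min(links_per_board.values())
    scaled = {b: total_rates[b] * k for b in total_rates}  # A*K, B*K, ...
    g = reduce(gcd, scaled.values())
    return {b: v // g for b, v in scaled.items()}          # M, N, ...

# Fig. 3 numbers: 30G/100G totals, 3 and 4 links -> (30*3):(100*3) = 3:10
targets = member_targets({'board1': 30, 'board2': 100},
                         {'board1': 3, 'board2': 4})
```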
It can be seen from the above embodiments that the stack member list includes the actual interface members and virtual members between the network board and each stacking board, where the virtual members are determined according to whether the rate weight ratios are the same and according to the actual number of links between the network board and each stacking board. This further ensures that, when the network board sends traffic according to the stack member list, it can load-share the traffic reasonably according to the rate of the stacking ports on each stacking board.
The embodiments of the present disclosure further provide a traffic Hash method applied to a network board of an IRF network, where the IRF network further includes a service board and two or more stacking boards associated with the network board, and the method includes:
acquiring traffic to be sent from the service board;
acquiring the number of links between the network board and each stacking board and the total stacking port rate of each stacking board;
determining a rate weight ratio according to the ratio of the total stacking port rates of the stacking boards;
determining a stack member list according to the number of links between the network board and each stacking board and the rate weight ratio;
Hash-sending the traffic according to the stack member list;
wherein the total stacking port rate of each stacking board is the sum of the rates of all stacking ports on that board.
It can be seen from the foregoing that the network board determines the stack member list according to the number of links between itself and each stacking board and the rates of the stacking ports on each stacking board, and applies a Hash algorithm over that list, so that the network board load-shares traffic reasonably according to the stacking port rates on each stacking board.
The present disclosure also provides a traffic processing device. The device is applied to a network board in a transmission network, where the transmission network further comprises two or more stacking boards associated with the network board. The device comprises:
an acquisition module, configured to acquire the number of links between the network board and each stacking board and the total stacking port rate of each stacking board;
a calculation module, configured to determine a rate weight ratio according to the ratio of the total stacking port rates of the stacking boards, and to determine a stack member list according to the number of links between the network board and each stacking board and the rate weight ratio;
a sending module, configured to send the traffic to be sent according to the stack member list;
wherein the total stacking port rate of each stacking board is the sum of the rates of all stacking ports on that board.
In another embodiment, the calculation module compares the total stacking port rate values of the stacking boards, rounds the ratio, and uses the rounded ratio as the rate weight ratio.
In another embodiment, the calculation module determines the stack member list according to whether the rate weight ratios are the same and according to the number of links between the network board and each stacking board.
Drawings
FIG. 1 is a schematic diagram of a network provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of traffic handling provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a network provided by an embodiment of the present disclosure;
FIG. 4 is a flow chart of a method of traffic handling provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of a Hash method of traffic according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination".
As shown in fig. 1, in one embodiment of traffic processing, the network board obtains information about the service boards and stacking boards linked to it, including the index numbers of the service boards and the number of stacking boards, and takes the stacking board as the granularity. When traffic needs to be sent, the index number of the sending service board is taken modulo the number of stacking boards, and the result of the modulo operation determines through which stacking board that service board's traffic is sent.
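The prior-art selection rule just described amounts to a single modulo operation; a one-line sketch (the function and index names are illustrative, not from the patent):

```python
def pick_board_by_modulo(service_board_index, num_stacking_boards):
    """Prior-art scheme: the sending service board's index modulo the
    number of stacking boards picks the egress stacking board, ignoring
    link counts and stacking-port rates entirely."""
    return service_board_index % num_stacking_boards

# With 2 stacking boards, service boards 0..3 simply alternate boards.
chosen = [pick_board_by_modulo(i, 2) for i in range(4)]  # -> [0, 1, 0, 1]
```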
In this embodiment, if there is only one service board, its traffic is always sent to a single stacking board under the above method. When there are multiple service boards, because only the stacking board is used as the granularity, and neither the number of links to the network board nor the stacking port rate on each stacking board is considered, traffic cannot be distributed reasonably according to the stacking boards' resources and true load-shared transmission is not achieved. Traffic may well be steered to a stacking board with few links (raising the load on each of those links) or to a stacking board with a low stacking port rate (directly slowing the transmission of the traffic).
In another embodiment of traffic processing, the network board adds hardware table entries (e.g., a stack aggregation table and a stack port member table), binds all the stacking ports on every stacking board into one logical stack aggregation group through those entries, and sends traffic according to that aggregation group.
In this embodiment, the rates of the individual stacking ports in the aggregation group may differ greatly. For example, in fig. 1 stacking board 1 has 3 stacking ports at 10G each and stacking board 2 has 1 stacking port at 100G. Although stacking boards 1 and 2 together offer 130G arithmetically, once the ports are bundled into one aggregation group the bandwidth available to the group is only 60G (limited by the minimum stacking port rate, which is not elaborated here), so a great deal of the stacking boards' actual bandwidth is wasted.
To solve the technical problems of the above schemes, namely that traffic cannot be distributed reasonably according to the stacking boards' resources, that truly load-shared transmission of traffic cannot be achieved, and that binding the stacking ports into one aggregation group wastes the actual bandwidth of the stacking boards, the present disclosure provides the following embodiments.
As shown in fig. 2, the present disclosure provides an embodiment that exemplarily sets forth the traffic processing method protected by the present disclosure. The method is applied to a transmission network comprising a network board and two or more stacking boards associated with the network board, and comprises:
S1, acquiring the number of links between the network board and each stacking board and the total stacking port rate of each stacking board;
S2, determining a rate weight ratio according to the ratio of the total stacking port rates of the stacking boards;
S3, determining a stack member list according to the number of links between the network board and each stacking board and the rate weight ratio;
S4, sending the traffic to be sent according to the stack member list;
wherein the total stacking port rate of each stacking board is the sum of the rates of all stacking ports on that board.
It should be noted that, in one embodiment, the method can be applied directly on the network board: in most cases the network board includes a central processing unit that executes instructions and a storage medium that stores them, so the network board can store instructions for steps S1-S4 in the storage medium and execute them on the central processing unit. In another embodiment, a network management device may perform steps S1-S3 and send the resulting stack member list to the network board, which then sends traffic according to the received list.
In step S1, the network board (or the network management device; either may build the stack member list, and for convenience the network board is used as the example hereafter) obtains the number of links between itself and each stacking board and the total stacking port rate of each stacking board, either from its own configuration information or by sending a probe message to the stacking boards.
As shown in fig. 3, the network board learns from its port configuration information that it has 3 ports toward stacking board 1 (so there are 3 links between them) and 4 ports toward stacking board 2 (so there are 4 links between them). The network board may check whether it has already recorded the stacking rate of each stacking board; if not, it sends a rate query message to stacking boards 1 and 2, and each stacking board, upon receiving the query, returns its stacking rate information to the network board. In fig. 3, the network board learns that the total rate of stacking board 1 is 30G and that of stacking board 2 is 100G.
In step S2, the total rates of stacking boards 1 and 2 obtained above are 30G and 100G respectively; comparing them gives stacking board 1 : stacking board 2 = 3:10, which is taken as the rate weight ratio. In other embodiments the ratio of the boards' total rates may not be an integer ratio; for ease of calculation it may be rounded, and the rounded ratio is used as the rate weight ratio.
In step S3, as can be seen from the rate weight ratio obtained in S2, the total stacking port rates of stacking boards 1 and 2 differ, with stacking board 2 higher than stacking board 1; that is, stacking board 2 can forward more traffic than stacking board 1. In another embodiment, the total stacking port rates of the stacking boards may be equal, in which case the rate weight ratios derived from them are also equal (e.g., if the total stacking port rates of boards 1 and 2 are both 100G, the rate weight ratio is 1:1).
S31, judging whether the rate weight ratios are the same; if so, executing S32, otherwise executing S33;
S32, the stack member list comprises the actual interface members between the network board and each stacking board and virtual members between the network board and each stacking board, where the virtual members are virtualized, taking the maximum number of links between the network board and any stacking board as the reference number, for each stacking board whose link count is less than the reference number, so that every stacking board has the same number of members in the stack member list;
S33, the stack member list comprises the actual interface members between the network board and each stacking board and virtual members between the network board and each stacking board, where the virtual members are virtualized for each stacking board by taking the minimum number of links between the network board and any stacking board as the reference number.
When step S33 is executed, the total stacking port rate of each stacking board may be multiplied by the reference number, and the products compared to obtain a ratio result (referred to as the target ratio result for ease of distinction); virtual interface members are then virtualized according to the target ratio result. In one implementation, each target ratio value in the target ratio result is divided by their common divisor, and virtual interface members are virtualized according to the reduced result.
To facilitate the description of the above calculation, this embodiment further provides a formula for determining the number of interface members (actual plus virtual) required for each stacking board, as follows:
M:N=(A*K):(B*K)
where M is the total number of actual and virtual interface members between the network board and stacking board 1 in this embodiment, and N is the total number of actual and virtual interface members between the network board and stacking board 2;
A denotes the total stacking port rate of stacking board 1, and B denotes the total stacking port rate of stacking board 2;
K is the reference number.
The values of M and N are each greater than or equal to the actual number of interfaces (links) between the network board and stacking board 1 and stacking board 2 respectively.
It should be noted that this embodiment only exemplifies the case of 2 stacking boards; when there are more than 2 stacking boards the principle is the same. For example, with a stacking board 3 the formula becomes:
M:N:O=(A*K):(B*K):(C*K)
where O is the total number of actual and virtual interface members between the network board and stacking board 3, C is the total stacking port rate of stacking board 3, and so on.
For the present embodiment, since the rate weight ratio is 3:10, step S33 is performed, yielding the stack member list shown below:
Stack member list: { x0, x1, x2, y0, y1, y2, y3, y0', y1', y2', y3', y0'', y1'' }
As can be seen from the list, x0, x1, x2, y0, y1, y2, and y3 are actual interface members, i.e. the interfaces actually linking the network board to stacking board 1 and stacking board 2, while y0', y1', y2', y3', y0'', and y1'' are virtual members, i.e. interface information virtualized by the network board according to step S33.
The virtual interface information is derived as follows: the minimum number of links between the network board and any stacking board is 3, so 3 is taken as the reference number, and the formula gives:
M:N = (30*3):(100*3) = 90:300 = 3:10
according to the above result, the stack board 1 should include 3 interface members (actual interface member + virtual interface member), the stack board 2 should include 10 interface members (actual interface member + virtual interface member), it is known from the obtained actual number of links between the web board and the stack board 1 and the stack board 2 that there are 3 interface members between the web board and the stack board 1 and 4 interface members between the web board and the stack board 2, the M value is equal to the actual number of interfaces between the web board and the stack board 1, the condition is satisfied and the virtual interface between the web board and the stack board 1 does not need to be added, the N value is 6 more than the actual number of interfaces between the web board and the stack board 2, and 6 virtual interface members need to be virtualized between the web board and the stack board 2, thereby obtaining the stack member list.
It should be noted that, for convenience of description, this embodiment is illustrated with 2 stacking boards; in practice the number of stacking boards may be greater than 2, and the principle is the same as in the above embodiment.
When the network board forwards traffic, it can send the traffic reasonably through a load-sharing operation over the stack member list obtained in the steps above.
For completeness, the present disclosure provides another embodiment in which the total stacking port rates of stacking boards 1 and 2 are both 100G, so the rate weight ratio obtained by the network board is 1:1 and step S32 is performed.
In step S32, the maximum number of links between the network board and any stacking board, 4, is taken as the reference number. Stacking board 1, whose link count is smaller than the reference number, is identified, and 1 virtual port is virtualized for it so that its virtual ports plus actual ports equal the reference number, as follows:
Stack member list: { x0, x1, x2, x0', y0, y1, y2, y3 }
When the network board forwards traffic, it can likewise send the traffic reasonably through a load-sharing operation over the stack member list obtained in this step.
As can be seen from the above embodiments, the number of links between the network board and each stacking board and the total stacking port rate of each stacking board are both fully considered. A stacking board with a high total stacking port rate but few interfaces to the network board is given additional virtual ports so that it can handle more traffic; a stacking board with the same total stacking port rate but fewer network board links is given a virtual interface so that every stacking port's traffic-handling share is equal. Through this dynamic adjustment, the network board finally achieves balanced (load-shared) sending of traffic.
As shown in fig. 5, to tie this to practical applications, the present disclosure further provides a traffic Hash method applied to a network board of an IRF network, where the IRF network further includes a service board and two or more stacking boards associated with the network board. The method includes:
S7, acquiring traffic to be sent from the service board;
S8, acquiring the number of links between the network board and each stacking board and the total stacking port rate of each stacking board;
S9, determining a rate weight ratio according to the ratio of the total stacking port rates of the stacking boards;
S10, determining a stack member list according to the number of links between the network board and each stacking board and the rate weight ratio;
S11, Hash-sending the traffic according to the stack member list;
wherein the total stacking port rate of each stacking board is the sum of the rates of all stacking ports on that board.
In step S7, the network board receives the traffic to be sent from the service board; it then performs a Hash operation on that traffic over the stack member list generated in steps S8, S9 and S10, and sends the traffic according to the result. Steps S8, S9 and S10 generate the stack member list in the same way as steps S1, S2 and S3, and are not repeated here.
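A toy sketch of the hash-send step; CRC32 stands in for whatever hash function the hardware actually uses, and the flow key and member names are made up for the example:

```python
import zlib

def hash_send(flow_key, member_list):
    """Hash the flow's key (e.g. a serialized 5-tuple) onto the stack
    member list; because a high-rate board holds proportionally more
    (actual + virtual) entries, it attracts proportionally more flows."""
    idx = zlib.crc32(flow_key) % len(member_list)
    return member_list[idx]

# 3 members for board x versus 10 for board y, as in the 3:10 example above.
members = ['x.p0', 'x.p1', 'x.p2'] + [f'y.p{i}' for i in range(4)] \
          + [f'y.v{i}' for i in range(6)]
egress = hash_send(b'10.0.0.1>10.0.0.2:tcp:80', members)
```

Because the hash of a fixed key is deterministic, all packets of one flow land on the same member, preserving per-flow packet order.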
As can be seen from the above embodiments, the Hash operation is performed according to the obtained stack member list, so that the traffic can be reasonably sent.
Based on the same idea as the above embodiments, the present disclosure also provides a traffic processing device, which may be a network board associated (i.e., in a neighbor relationship) with two or more stacking boards. The device comprises:
an acquisition module, configured to acquire the number of links between the network board and each stacking board and the total stacking port rate of each stacking board;
a calculation module, configured to determine a rate weight ratio according to the ratio of the total stacking port rates of the stacking boards, and to determine a stack member list according to the number of links between the network board and each stacking board and the rate weight ratio;
a sending module, configured to send the traffic to be sent according to the stack member list;
wherein the total stacking port rate of each stacking board is the sum of the rates of all stacking ports on that board.
In another embodiment, the calculation module compares the total stacking port rate values of the stacking plates and rounds the ratio, and the rounded ratio is used as the rate weight ratio.
In another embodiment, the calculation module determines the list of stacking members according to whether the rate weight ratios are the same and according to the number of links between the mesh plate and each stacking plate.
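On one reading of this calculation logic, the two cases (equal vs. unequal rate weights) can be sketched as follows. This is a sketch under assumptions: the rate ratio is reduced to smallest integers via the GCD, virtual members are labelled explicitly, and the board indices and member tuples are illustrative rather than part of the claimed implementation.

```python
from functools import reduce
from math import gcd

def rate_weight_ratio(total_rates):
    """Reduce the total stack-port rates to a smallest-integer ratio."""
    g = reduce(gcd, total_rates)
    return [r // g for r in total_rates]

def build_member_list(link_counts, total_rates):
    """Build a stack member list per the claimed scheme (a sketch).

    If all rate weights are equal, every stack board is padded with
    virtual members up to the maximum link count; otherwise the minimum
    link count is taken as the reference number and each board receives
    members in proportion to its rate weight times that reference.
    """
    weights = rate_weight_ratio(total_rates)
    if len(set(weights)) == 1:
        # Equal rate weights: pad every board to the maximum link count.
        reference = max(link_counts)
        counts = [reference] * len(link_counts)
    else:
        # Unequal rate weights: scale by the minimum link count.
        reference = min(link_counts)
        counts = [w * reference for w in weights]
    members = []
    for board, n in enumerate(counts):
        real = link_counts[board]
        for i in range(n):
            kind = "real" if i < real else "virtual"
            members.append((board, i, kind))
    return members
```

For example, two stack boards with 2 and 1 links and equal total rates yield two members each (one of them virtual), so the Hash spreads traffic evenly; with total rates in a 4:1 ratio and one reference link, the first board receives four member slots to the second board's one.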
The apparatus provided by the embodiments of the present disclosure has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, any detail not mentioned in the apparatus embodiments can be found in the corresponding part of the method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed method and electronic device may be implemented in other manners, and the embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate its technical solutions rather than to limit them, and the scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments, or equivalent technical features thereof, may still be modified or readily substituted within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and should be construed as falling within it. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of traffic processing, wherein the method is applied to a transport network, the transport network comprising a network board and two or more stack boards associated with the network board, the method comprising:
acquiring the number of links between the network board and each stack board and the total stack port rate of each stack board;
determining a rate weight ratio according to the ratio of the total stack port rates of the stack boards;
determining a stack member list according to the number of links between the network board and each stack board and the rate weight ratio;
sending the traffic to be sent according to the stack member list;
wherein the total stack port rate of each stack board is the sum of the rates of all stack ports on that stack board.
2. The method according to claim 1, wherein determining a rate weight ratio according to the ratio of the total stack port rates of the stack boards comprises:
comparing the total stack port rate values of the stack boards, reducing the ratio to integers, and using the reduced integer ratio as the rate weight ratio.
3. The method of claim 1, wherein determining a stack member list according to the number of links between the network board and each stack board and the rate weight ratio comprises:
determining the stack member list according to whether the rate weight ratios are the same and according to the number of links between the network board and each stack board.
4. The method according to claim 3, wherein, when the rate weight ratios are the same, determining the stack member list according to the number of links between the network board and each stack board and the rate weight ratio comprises:
the stack member list comprising the actual interface members between the network board and each stack board and virtual members between the network board and each stack board, wherein the virtual members are virtual interface members created, taking the maximum number of links between the network board and any stack board as the reference number, for each stack board whose number of links to the network board is less than the reference number, so that each stack board has the same number of members in the stack member list.
5. The method according to claim 3, wherein, when the rate weight ratios are different, determining the stack member list according to the number of links between the network board and each stack board and the rate weight ratio specifically comprises:
the stack member list comprising the actual interface members between the network board and each stack board and virtual members between the network board and each stack board, wherein the virtual members are virtual interface members created for each stack board according to a reference number, the reference number being the minimum number of links between the network board and any stack board.
6. The method according to claim 5, wherein creating virtual interface members for each stack board according to the reference number specifically comprises:
multiplying the total stack port rate of each stack board by the reference number, comparing the products to obtain a ratio, and creating virtual interface members according to that ratio.
7. A traffic Hash method, wherein the method is applied to a network board of an IRF network, the IRF network further comprising a service board and two or more stack boards associated with the network board, the method comprising:
acquiring the traffic to be sent from the service board;
acquiring the number of links between the network board and each stack board and the total stack port rate of each stack board;
determining a rate weight ratio according to the ratio of the total stack port rates of the stack boards;
determining a stack member list according to the number of links between the network board and each stack board and the rate weight ratio;
Hash-sending the traffic to be sent according to the stack member list;
wherein the total stack port rate of each stack board is the sum of the rates of all stack ports on that stack board.
8. An apparatus for traffic processing, the apparatus being a network board in a transport network, the transport network further comprising two or more stack boards associated with the network board, the apparatus comprising:
an acquisition module, configured to acquire the number of links between the network board and each stack board and the total stack port rate of each stack board;
a calculation module, configured to determine a rate weight ratio according to the ratio of the total stack port rates of the stack boards, and to determine a stack member list according to the number of links between the network board and each stack board and the rate weight ratio;
a sending module, configured to send the traffic to be sent according to the stack member list;
wherein the total stack port rate of each stack board is the sum of the rates of all stack ports on that stack board.
9. The apparatus of claim 8, wherein the calculation module compares the total stack port rate values of the stack boards, reduces the ratio to integers, and uses the reduced integer ratio as the rate weight ratio.
10. The apparatus of claim 8, wherein the calculation module determines the stack member list according to whether the rate weight ratios are the same and according to the number of links between the network board and each stack board.
CN202010132940.1A 2020-02-29 2020-02-29 Flow processing method, flow Hash method and device Active CN111478858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010132940.1A CN111478858B (en) 2020-02-29 2020-02-29 Flow processing method, flow Hash method and device

Publications (2)

Publication Number Publication Date
CN111478858A true CN111478858A (en) 2020-07-31
CN111478858B CN111478858B (en) 2022-05-27

Family

ID=71747112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010132940.1A Active CN111478858B (en) 2020-02-29 2020-02-29 Flow processing method, flow Hash method and device

Country Status (1)

Country Link
CN (1) CN111478858B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020057685A1 (en) * 2000-09-20 2002-05-16 Broadcom Corporation Method and apparatus for enabling L3 switching by a network switch in a stacking environment
US20070081463A1 (en) * 2005-10-11 2007-04-12 Subash Bohra System and Method for Negotiating Stack Link Speed in a Stackable Ethernet Switch System
CN101022456A (en) * 2007-03-22 2007-08-22 华为技术有限公司 Link junction method, port load equalizing method and apparatus thereof
CN103428103A (en) * 2013-08-08 2013-12-04 杭州华三通信技术有限公司 Link load control method and stacking device
US20140140210A1 (en) * 2012-11-20 2014-05-22 Inventec Corporation Network system and load balancing method
WO2017024824A1 (en) * 2015-08-11 2017-02-16 中兴通讯股份有限公司 Aggregated link-based traffic management method and device
CN107438029A (en) * 2016-05-27 2017-12-05 华为技术有限公司 The method and apparatus for forwarding data
CN108134744A (en) * 2017-12-06 2018-06-08 新华三技术有限公司 Load sharing method and veneer in a kind of IRF stackings
CN110138675A (en) * 2019-05-30 2019-08-16 新华三信息安全技术有限公司 A kind of data traffic retransmission method, device, web plate and distributed apparatus


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant