CN113806083B - Method and device for processing aggregate flow data - Google Patents

Info

Publication number: CN113806083B
Application number: CN202111040279.2A
Authority: CN (China)
Prior art keywords: load, cpu, target, bitmap, flow data
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111040279.2A
Other languages: Chinese (zh)
Other versions: CN113806083A
Inventors: 邢涛 (Xing Tao), 王振 (Wang Zhen), 叶倩 (Ye Qian)
Original and current assignee: Hangzhou DPTech Technologies Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Priority application: CN202111040279.2A
Publication of application: CN113806083A
Application granted; publication of grant: CN113806083B
Legal status: Active

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505: Allocation of resources to service a request, the resource being a machine, considering the load
    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present specification provides a method and apparatus for processing aggregate flow data. The method comprises: after aggregate flow data is received, determining a target CPU for the aggregate flow data among a plurality of CPUs based on a preset load sharing algorithm; determining whether the load condition of the target CPU meets the load requirement; and if so, sending the aggregate flow data to the target CPU for processing. With this scheme, a target CPU for processing the aggregate flow data is first determined among the plurality of CPUs based on the preset load sharing algorithm, and the load condition of that target CPU is then checked; the aggregate flow data is sent to the target CPU only when its load condition meets the load requirement. This effectively solves the problem of some CPUs becoming overloaded because different flows carry different amounts of data, and thereby achieves CPU load balancing in the true sense.

Description

Method and device for processing aggregate flow data
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a method and an apparatus for processing aggregate flow data.
Background
With the development of network systems, network attacks have become endless. Such attacks maliciously consume the limited resources of a network or occupy the system, undermining its ability to provide services externally, and they can be identified by detecting network data flows. Current network data flow detection techniques include NetFlow, xFlow, sFlow, NetStream, and the like. These techniques collect data flows that share common tuple fields, parse and aggregate them to obtain aggregate flow data, and then detect the aggregate flow data to identify malicious attacks.
For a single router, the five-tuples of the aggregate flow data are the same; that is, the source IP, destination IP, source port, destination port and protocol of multiple pieces of aggregate flow data are all identical. When existing multi-core architecture devices distribute data according to its five-tuple or seven-tuple, all the aggregate flow data is therefore dispatched to a single CPU (Central Processing Unit) for processing, which greatly reduces the performance of the device.
Disclosure of Invention
To address these problems, the present application provides a method and a device for processing aggregate flow data. The specific technical solution is as follows:
According to a first aspect of the present application, there is provided a method for processing aggregate flow data, the method being applied to a multi-core architecture device that includes a plurality of CPUs, the method comprising:
after the aggregate flow data is received, determining a target CPU for the aggregate flow data in the plurality of CPUs based on a preset load sharing algorithm;
determining whether the load condition of the target CPU meets the load requirement;
and if so, sending the aggregate flow data to the target CPU for processing.
Optionally, in the method for processing aggregate flow data, determining a target CPU for the aggregate flow data among the plurality of CPUs based on a preset load sharing algorithm includes:
extracting a specified header field from the received aggregate flow data, and operating on the specified header field with a preset parameter algorithm to obtain a first scheduling parameter of the aggregate flow data;
performing a remainder operation on the first scheduling parameter with the number of CPUs as the divisor, to obtain a first operation result;
and determining the target CPU among the plurality of CPUs based on the first operation result.
Optionally, in the method for processing aggregate flow data, each CPU corresponds to a load bitmap, where the load bitmaps corresponding to each CPU include the same number of load bitmap units, and determining, based on a preset load sharing algorithm, a target CPU for the aggregate flow data in the multiple CPUs includes:
extracting a specified header field from received aggregate flow data, and operating the specified header field based on a preset parameter algorithm to obtain a second scheduling parameter of the aggregate flow data;
performing remainder operation on the second scheduling parameters based on the number of the CPUs and the number of the load bitmap units to obtain a second operation result;
determining a CPU corresponding to the target load bitmap hit by the second operation result as a target CPU;
the judging whether the load condition of the target CPU meets the load requirement comprises the following steps:
and judging whether the load condition of the target CPU meets the load requirement or not based on the value of the target load bitmap unit in the target load bitmap hit by the second operation result.
Optionally, in the method for processing aggregate flow data, where the target CPU is determined based on CPU load bitmaps, the method for updating a CPU load bitmap includes:
for each load bitmap, acquiring the usage rate of the corresponding CPU at a preset time interval;
and determining the value of each load bitmap unit in the load bitmap based on the usage rate, so as to update the load bitmap.
Optionally, in the method for processing aggregate flow data, the method further includes:
if the load condition of the target CPU does not meet the load requirement, determining a CPU with the lowest utilization rate from the plurality of CPUs, and sending the aggregate flow data to the CPU with the lowest utilization rate for processing;
alternatively, a different load sharing algorithm is employed to re-determine the target CPU for the aggregate flow data among the plurality of CPUs.
According to a second aspect of the present application, there is provided an apparatus for processing aggregate flow data, the apparatus being applied to a multi-core architecture device, the multi-core architecture device including a plurality of CPUs thereon, including:
the target determining module is used for determining a target CPU for the aggregate flow data in the plurality of CPUs based on a preset load sharing algorithm after the aggregate flow data is received;
the load judging module is used for judging whether the load condition of the target CPU meets the load requirement or not;
and the data sending module is used for sending the aggregate flow data to the target CPU for processing when judging that the load condition of the target CPU meets the load requirement.
Optionally, in the apparatus for processing aggregate flow data, the target determining module includes:
the first parameter calculation unit is used for extracting a specified header field from the received aggregate flow data, and operating on the specified header field with a preset parameter algorithm to obtain a first scheduling parameter of the aggregate flow data;
the first remainder operation unit is used for performing remainder operation on the first scheduling parameters in the first parameter calculation unit based on the number of the CPUs to obtain a first operation result;
and a first CPU determining unit configured to determine the target CPU among the plurality of CPUs based on a first operation result in the first remainder operation unit.
Optionally, in the device for processing aggregate flow data, each CPU corresponds to a load bitmap, where the load bitmaps corresponding to each CPU include the same number of load bitmap units, and the target determining module includes:
the second parameter calculation unit is used for extracting a specified header field from the received aggregate flow data, and calculating the specified header field based on a preset parameter algorithm to obtain a second scheduling parameter of the aggregate flow data;
the second remainder operation unit is used for performing remainder operation on the second scheduling parameters in the second parameter calculation unit based on the number of the CPUs and the number of the load bitmap units to obtain a second operation result;
a second CPU determining unit, configured to determine, as the target CPU, the CPU corresponding to the target load bitmap hit by the second operation result in the second remainder operation unit;
and the load judging module is used for judging whether the load condition of the target CPU meets the load requirement or not based on the value of the target load bitmap unit in the target load bitmap hit by the second operation result.
Optionally, the device for processing aggregate flow data further includes a CPU load bitmap update module, where, for each load bitmap, the CPU load bitmap update module is configured to acquire the usage rate of the corresponding CPU at a preset time interval;
and determining the value of each load bitmap unit in the load bitmap based on the utilization rate so as to update the load bitmap.
Optionally, the device for processing aggregate flow data further includes a secondary sending module, where the secondary sending module is configured to determine, from among the multiple CPUs, a CPU with a lowest usage rate if a load condition of the target CPU does not meet a load requirement, and send the aggregate flow data to the CPU with the lowest usage rate for processing;
or, invoking the target determination module to determine the target CPU for the aggregate flow data in the plurality of CPUs again by adopting a different load sharing algorithm.
According to the above technical solution, after the multi-core architecture device receives aggregate flow data, it can determine, among the plurality of CPUs and based on the preset load sharing algorithm, one target CPU for processing the aggregate flow data, and then check the load condition of that target CPU. The aggregate flow data is sent to the target CPU for processing only when its load condition meets the load requirement. This effectively solves the problem of some CPUs becoming overloaded because different flows carry different amounts of data, and thereby achieves CPU load balancing in the true sense.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application. Moreover, not all of the above-described effects may be required to be achieved by any one of the embodiments of the present application.
Drawings
FIG. 1 is a flow chart illustrating a method of processing aggregate flow data;
FIG. 2 is a flow chart illustrating a method of determining a target CPU;
FIG. 3 is a schematic diagram of a CPU load bitmap shown in the present application;
FIG. 4 is a schematic diagram of the values of the bitmap units of the CPU load bitmap when the CPU utilization is 15%;
FIG. 5 is a flow chart illustrating a method of determining a target CPU and determining a target CPU load condition;
FIG. 6 is a schematic diagram of the values of the bitmap units of the CPU load bitmap when the CPU utilization is 90%;
FIG. 7 is a hardware block diagram of a multi-core architecture device in which the apparatus for processing aggregate flow data of the present application is located;
FIG. 8 is a block diagram of an apparatus for processing aggregate flow data as shown herein;
FIG. 9 is a block diagram of a unit of a targeting module shown in the present application;
fig. 10 is a block diagram of elements of another targeting module shown in the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first message may also be referred to as a second message, and similarly, a second message may also be referred to as a first message, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
To solve the problem of reduced CPU performance, an embodiment of the present invention provides a method for processing aggregate flow data, so as to balance the load of aggregate flow data processing across a plurality of CPUs. Please refer to fig. 1, which is a flowchart of the method for processing aggregate flow data shown in the present application. The method can be applied to a multi-core architecture device on which a plurality of CPUs are configured. The method comprises the following steps:
s102: after the aggregate flow data is received, a target CPU is determined for the aggregate flow data among the plurality of CPUs based on a preset load sharing algorithm.
The aggregate flow data is data obtained by parsing and aggregating the collected data flows, and malicious attacks can be identified by detecting the aggregate flow data.
After the aggregate flow data is received, a target CPU is determined for it among the plurality of CPUs based on a preset load sharing algorithm. The load sharing algorithm may be a polling (round-robin) method or a hash method; one of the CPUs is selected as the target CPU for processing the aggregate flow data so as to keep the CPU loads balanced.
For example, the order of the multiple CPUs in the multi-core architecture device may be preset, and one CPU is selected by polling as the target CPU for processing the aggregate flow data;
or, after the aggregate flow data is received, a specified header field is extracted from the data, a first scheduling parameter is calculated for the specified header field based on a parameter algorithm such as hashing, and the target CPU is determined by taking the remainder of the first scheduling parameter.
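The two load sharing strategies named above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the choice of SHA-256 over a header field is an assumption.

```python
import hashlib
from itertools import count

NUM_CPUS = 32  # the patent's running example also uses 32 CPUs

def round_robin_picker(num_cpus):
    """Polling method: CPUs are taken in a preset order, one per received datum."""
    counter = count()
    return lambda: next(counter) % num_cpus

def hash_pick(header_field: bytes, num_cpus: int) -> int:
    """Hash method: derive a scheduling parameter from a specified header field,
    then take the remainder by the number of CPUs."""
    param = int.from_bytes(hashlib.sha256(header_field).digest()[:4], "big")
    return param % num_cpus

pick = round_robin_picker(NUM_CPUS)
print([pick() for _ in range(3)])           # [0, 1, 2]
print(hash_pick(b"flow-header", NUM_CPUS))  # a CPU number in 0..31
```

Polling spreads data evenly by count; hashing keeps data with the same header field on the same CPU.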
S104: and judging whether the load condition of the target CPU meets the load requirement or not.
Because the flow data carried in different pieces of aggregate flow data differs in size, the processing speed differs as well; when several large pieces of data are dispatched to the same CPU, that CPU becomes overloaded and performance is lost. To avoid this, before the aggregate flow data is sent to the target CPU, it is determined whether the load condition of the target CPU meets the load requirement.
The load condition may include the usage rate of the target CPU, and the load requirement may be that the usage rate of the target CPU is less than or equal to a preset usage rate threshold. That is, after the target CPU is determined, its current usage rate is obtained; if the current usage rate is less than or equal to the preset threshold, the load condition of the target CPU is considered to meet the load requirement, and if it is greater than the threshold, the load condition is considered not to meet the load requirement.
Alternatively, after the target CPU is determined, its load bitmap is obtained, and whether the load condition of the target CPU meets the load requirement is judged based on the values of the load bitmap units in that load bitmap.
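The threshold-based check can be sketched in a few lines; the 80% threshold is an assumed illustrative value, as the text does not fix one.

```python
USAGE_THRESHOLD = 80.0  # percent; assumed value, the patent does not specify one

def meets_load_requirement(current_usage: float,
                           threshold: float = USAGE_THRESHOLD) -> bool:
    """Load requirement: the target CPU's usage rate is at or below the threshold."""
    return current_usage <= threshold

print(meets_load_requirement(45.0))  # True: the CPU can be scheduled
print(meets_load_requirement(92.0))  # False: another CPU must be chosen
```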
S106: and if so, sending the aggregate flow data to the target CPU.
If the load condition of the target CPU is judged to meet the load requirement, the current load of the target CPU is acceptable and it can be selected to process the aggregate flow data.
If the load condition of the target CPU is judged not to meet the load requirement, the current load of the target CPU is too high. In that case, a target CPU may be determined again for the aggregate flow data among the plurality of CPUs using a load sharing algorithm; or the CPU with the lowest usage rate among the plurality of CPUs is determined, and the aggregate flow data is sent to that CPU for processing.
After receiving aggregate flow data, the multi-core architecture device first determines, among the plurality of CPUs and based on a preset load sharing algorithm, a target CPU for processing the data, then checks the load condition of that CPU, and sends the data to it for processing only when the load condition meets the load requirement. This effectively solves the problem of some CPUs being overloaded because different flows carry different amounts of data, thereby achieving CPU load balancing in the true sense.
Implementations of the present application are described below in connection with specific embodiments.
In the foregoing step S102, after receiving the aggregate flow data, a target CPU is determined for the aggregate flow data among the multiple CPUs based on a preset load sharing algorithm, and the implementation thereof is as shown in fig. 2:
step 1022, extracting a specified header field from the received aggregate stream data, and operating on the specified header field based on a preset parameter algorithm to obtain a first scheduling parameter of the aggregate stream data.
Header fields of the aggregate flow data are generally divided into fixed fields and non-fixed fields, such as data version, communication classification and the like are fixed fields, time stamps, log sequence numbers and the like are non-fixed fields, and the non-fixed fields can be selected for operation of the first scheduling parameters in this step.
Taking Netflow data as an example, since sysUptime and Timestamp in header fields all change with time, one field can be selected from the fields, and operation is performed together with a FlowSequence field according to data change, the algorithm can be a hash algorithm, or an arbitrary result is a positive integer algorithm, such as karatsuba multiplication, so as to obtain a first scheduling parameter, and then a target CPU is selected for the aggregate flow data based on the first scheduling parameter.
Step 1024, performing remainder operation on the first scheduling parameter based on the number of CPUs, to obtain a first operation result.
Step 1026, determining the target CPU among the plurality of CPUs based on the first operation result.
In this embodiment, before selecting the target CPU, the CPU in the multi-core architecture device may be numbered. For example, the plurality of CPUs are sequentially numbered starting from 0. It is assumed that there are 32 CPUs in the multi-core architecture device, and the 32 CPUs are numbered sequentially from 0, and the numbers of the CPUs are 0, 1, 2, …, and 31 in this order. After numbering the plurality of CPUs, performing a remainder operation on the first scheduling parameter, wherein the divisor is the number of CPUs, the remainder is used as a first operation result, and the CPU with the same number as the remainder is used as a target CPU.
For example, if the number of CPUs is 32, the remainder obtained from the remainder operation on the first scheduling parameter ranges from 0 to 31; if the remainder is 17, the CPU numbered 17 is used as the target CPU to process the data.
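Steps 1022 through 1026 can be sketched as below. The multiplicative mix of sysUptime and FlowSequence is an illustrative stand-in for the hash / positive-integer algorithm named in the text, and the constant is a Knuth-style multiplier chosen for illustration only.

```python
def first_scheduling_param(sys_uptime: int, flow_sequence: int) -> int:
    """Combine two non-fixed NetFlow header fields into a positive-integer
    scheduling parameter (illustrative stand-in for the patent's algorithm)."""
    return (sys_uptime * 2654435761 + flow_sequence) & 0xFFFFFFFF

def target_cpu(param: int, num_cpus: int) -> int:
    """Remainder operation: CPUs are numbered 0..num_cpus-1, and the remainder
    is the number of the target CPU."""
    return param % num_cpus

param = first_scheduling_param(sys_uptime=123456, flow_sequence=789)
print(target_cpu(param, 32))  # a CPU number in 0..31
```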
From the above description, it can be seen that by numbering the CPUs, the target CPU can be quickly determined among the plurality of CPUs using the load sharing algorithm.
In another embodiment of the present application, after receiving the aggregate flow data, the multi-core architecture device can obtain both the target CPU and its load condition through a single operation.
In this example, each CPU may be provided with a corresponding load bitmap, which is used to indicate the usage rate of the corresponding CPU, where the number of load bitmap units included in each load bitmap is the same, and the number may be preset. Referring to fig. 3, fig. 3 shows a load bitmap comprising N bitmap cells.
The load bitmap unit of the load bitmap corresponding to the CPU may be determined based on the usage rate of the CPU, and an update time interval may be preset on the multi-core architecture device, for example, the preset update time interval is 1 second. The current utilization rate of each CPU can be obtained at intervals of updating time, and the value of each load bitmap unit in the corresponding load bitmap is determined based on the current utilization rate.
Since CPU usage is at most 100%, it is simplest to set the number of load bitmap units to 10, so that each unit represents 10% of CPU usage. For example, counting from the left, the first unit represents 0%-10% usage, the second represents 10%-20%, and so on. If the CPU's usage covers a unit, i.e. the usage reaches the upper limit represented by that unit, the value of that unit is set to a first parameter, which may be 0; the value of each unit not covered by the CPU's usage is set to a second parameter, which may be 1.
For example, referring to fig. 4, when the CPU's usage rate is 15%, it completely covers the first load bitmap unit (0%-10%), so the value of the first unit is set to 0; the second and remaining units are not completely covered, so the values from the second unit to the last are all set to 1.
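The bitmap update rule above can be sketched as follows, assuming 10 units of 10% each with first parameter 0 and second parameter 1, as in the example.

```python
N_UNITS = 10  # one unit per 10% band of CPU usage

def load_bitmap(usage_percent: float, n_units: int = N_UNITS) -> list:
    """Unit i covers the band up to (i+1)*10%. A unit fully covered by the
    usage gets the first parameter (0); an uncovered unit gets the second (1)."""
    band = 100 / n_units
    return [0 if usage_percent >= (i + 1) * band else 1 for i in range(n_units)]

print(load_bitmap(15))  # [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]  (fig. 4)
print(load_bitmap(90))  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]  (fig. 6)
```

Run at the preset update interval, this recomputes every CPU's bitmap from its current usage rate.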
In this example, after receiving the aggregate flow data, the multi-core architecture device may determine both the target CPU and its load condition through one operation based on the CPU load bitmaps, with the steps shown in fig. 5:
step 502, extracting a specified header field from the received aggregate stream data, and operating the specified header field based on a preset parameter algorithm to obtain a second scheduling parameter of the aggregate stream data.
This step is similar to step 1022, in which the parameter algorithm used to calculate the scheduling parameter may be the same as or different from the parameter algorithm in step 1022, and for convenience of distinction, the scheduling parameter calculated in this step may be referred to as the second scheduling parameter.
And step 504, performing remainder operation on the second scheduling parameters based on the number of the CPUs and the number of the load bitmap units to obtain a second operation result.
The divisor of the remainder operation of the second scheduling parameter is the product of the number of the CPUs and the number of the CPU load bitmap units.
And step 506, determining the CPU corresponding to the target load bitmap hit by the second operation result as a target CPU.
Optionally, each CPU is numbered sequentially from 0, each load bitmap includes N load bitmap units, the load bitmap units corresponding to each CPU are numbered sequentially according to the numbering sequence of the CPUs, for example, the load bitmap units of the load bitmap corresponding to the CPU with the number 0 are numbered from 0 to N-1, the load bitmap units of the load bitmap corresponding to the CPU with the number 1 are numbered from N to 2N-1, and so on, each load bitmap unit corresponds to a unique number as shown in the following table 1:
TABLE 1

CPU number    Load bitmap unit numbers
0             0 to N-1
1             N to 2N-1
...           ...
M-1           (M-1)×N to M×N-1
If the number of the CPUs is M, performing remainder operation on the second scheduling parameter, wherein the divisor is M multiplied by N, the remainder ranges from 0 to M multiplied by N-1, determining the CPU corresponding to the load bitmap unit with the same number as the remainder, and taking the CPU as a target CPU.
For example, if there are 32 CPUs, the load bitmap of each CPU includes 5 load bitmap units, and each load bitmap unit is numbered as shown in table 2 below:
TABLE 2

CPU number    Load bitmap unit numbers
0             0 to 4
1             5 to 9
2             10 to 14
...           ...
31            155 to 159
A remainder operation is performed on the second scheduling parameter with a divisor of 160; if the remainder is 8, the load bitmap unit numbered 8 is found to correspond to the CPU numbered 1, and the CPU numbered 1 is taken as the target CPU.
Optionally, since the second operation result is a decimal number, the number of load bitmap units can be set to 10; after the remainder operation on the second scheduling parameter, the ones digit of the remainder locates the load bitmap unit, and the remaining higher digits locate the target CPU.
For example, if there are 32 CPUs and each CPU's load bitmap includes 10 units, the remainder operation on the second scheduling parameter uses a divisor of 320. If the remainder is 17, its ones digit 7 locates the 8th load bitmap unit from the left (numbered 7) on the target CPU's load bitmap, and its tens digit 1 locates the CPU numbered 1, so the CPU numbered 1 is the target CPU.
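Steps 504 and 506, including the decimal shortcut, reduce to a single remainder operation; the values below match the worked examples in the text (remainder 17 with 10 units, remainder 8 with 5 units).

```python
NUM_CPUS, N_UNITS = 32, 10

def locate(param: int, num_cpus: int = NUM_CPUS, n_units: int = N_UNITS):
    """One remainder operation over num_cpus * n_units global unit numbers.
    With n_units = 10, the ones digit of the remainder is the bitmap unit
    and the higher digits are the CPU number."""
    remainder = param % (num_cpus * n_units)
    return divmod(remainder, n_units)  # (CPU number, unit number in its bitmap)

print(locate(17))        # (1, 7): CPU 1, the 8th unit from the left
print(locate(8, 32, 5))  # (1, 3): global unit 8 lies in CPU 1's bitmap
```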
And step 508, judging whether the load condition of the target CPU meets the load requirement or not based on the value of the target load bitmap unit in the target load bitmap hit by the second operation result.
More simply, whether the load condition of the target CPU meets the load requirement is judged based on the value of the load bitmap unit determined by the second operation result in step 506. The value of a load bitmap unit is either the first parameter or the second parameter. The first parameter indicates that the CPU's usage completely covers the unit: if the unit hit by the second operation result holds the first parameter, the target CPU's load condition does not meet the load requirement, the target CPU cannot be scheduled, and a target CPU must be reselected. The second parameter indicates that the CPU's usage does not completely cover the unit: if the hit unit holds the second parameter, the target CPU's load condition meets the load requirement and the target CPU can be scheduled.
For example, the first parameter is set to 0, the second parameter is set to 1, if the second operation result hits the 7 th load bitmap unit in the load bitmap shown in fig. 4, the value is 1, which indicates that the load condition of the target CPU corresponding to the load bitmap meets the load requirement, and the target CPU may be scheduled, and the aggregate flow data is sent to the target CPU for processing; if the second operation result hits the 7 th load bitmap unit in the load bitmap shown in fig. 6, the value is 0, which indicates that the CPU load condition corresponding to the load bitmap does not meet the load requirement and cannot be scheduled.
When the load condition of the target CPU is judged not to meet the load requirement, the target CPU may be re-determined in either of two ways: sort the CPUs by their utilization rates, determine the CPU with the lowest utilization rate as the new target CPU, and send the aggregate flow data to that CPU for processing; or re-determine the target CPU among the CPUs using a different load sharing algorithm, and again judge whether the load condition of the new target CPU meets the load requirement.
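The first fallback described above (choosing the CPU with the lowest utilization rate) can be sketched as follows; the function name and the usage-rate list are assumptions for illustration.

```python
# Sketch of the first fallback: pick the CPU with the lowest utilization rate.

def reselect_target_cpu(cpu_usage: list[float]) -> int:
    """Return the index of the least-utilized CPU."""
    return min(range(len(cpu_usage)), key=lambda i: cpu_usage[i])
```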
By the above method, a corresponding load bitmap is set for each CPU, and the values of its load bitmap units are updated according to the CPU utilization rate. After aggregate flow data is acquired, a load bitmap unit can be located with a single operation; the target CPU is then determined from the correspondence between the load bitmap containing that unit and its CPU, and whether the load condition of the target CPU meets the load requirement is determined from the value of that unit. There is no need to sort all the CPUs by utilization rate each time a piece of data is received, which improves the efficiency of determining a target CPU for aggregate flow data.
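Putting the pieces together, the dispatch path of this embodiment can be sketched as a single function: one remainder operation locates a bitmap unit, the unit's value decides schedulability, and a fallback picks a less-loaded CPU. The 0/1 convention follows the examples above; approximating "lowest utilization" by "most uncovered units" is an added assumption.

```python
# End-to-end sketch of the dispatch path described in this embodiment.

def dispatch(scheduling_parameter: int, bitmaps: list[list[int]]) -> int:
    """Return the CPU index to which the aggregate flow data is sent."""
    num_cpus = len(bitmaps)
    units_per_bitmap = len(bitmaps[0])
    remainder = scheduling_parameter % (num_cpus * units_per_bitmap)
    cpu_number, unit_index = divmod(remainder, units_per_bitmap)
    if bitmaps[cpu_number][unit_index] == 1:  # unit uncovered: schedulable
        return cpu_number
    # Fallback: the CPU whose bitmap has the most uncovered (1) units,
    # i.e. the lowest utilization under the convention above.
    return max(range(num_cpus), key=lambda i: sum(bitmaps[i]))
```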
Corresponding to the foregoing embodiments of the method for processing aggregate flow data, the present specification also provides embodiments of an apparatus for processing aggregate flow data and a terminal to which the apparatus is applied.
The embodiment of the apparatus for processing aggregate flow data in the present application can be applied to a multi-core architecture device. The apparatus embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking software implementation as an example, the apparatus in a logical sense is formed by the processor of the device where it is located reading corresponding computer program instructions from a nonvolatile memory into memory for execution. In terms of the hardware level, fig. 7 is a hardware structure diagram of a multi-core architecture device where the apparatus for processing aggregate flow data of the present application is located. In addition to the processor, memory, network interface, and nonvolatile memory shown in fig. 7, the multi-core architecture device in an embodiment may further include other hardware according to its actual functions, which is not described here again.
Referring to fig. 8, fig. 8 is a block diagram of an apparatus for processing aggregate flow data, the apparatus including:
a target determining module 802, configured to determine, after receiving the aggregate flow data, a target CPU among the plurality of CPUs based on a preset load sharing algorithm;
the load judging module 804 is configured to judge whether the load condition of the target CPU meets a load requirement;
and the data sending module 806 is configured to send the aggregate flow data to the target CPU for processing when the load condition of the target CPU is determined to meet the load requirement.
In this embodiment, after the multi-core architecture device receives the aggregate flow data, the target determining module 802 may determine a target CPU among the multiple CPUs. The units in the target determining module 802 are shown in fig. 9, which is a block diagram of the units of the target determining module of the present application, including:
a first parameter calculation unit 802a1, configured to extract a specified header field from received aggregate stream data, and calculate the specified header field based on a preset parameter algorithm to obtain a first scheduling parameter of the aggregate stream data;
a first remainder operation unit 802b1, configured to perform a remainder operation on the first scheduling parameter in the first parameter calculation unit based on the number of CPUs, to obtain a first operation result;
a first CPU determining unit 802c1 for determining the target CPU among the plurality of CPUs based on the first operation result in the first remainder operation unit.
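For illustration, the first variant (units 802a1 through 802c1) amounts to hashing a specified header field and taking the remainder by the CPU count. The patent does not name its "preset parameter algorithm"; the sketch below substitutes CRC32 as a stand-in assumption.

```python
# Hedged sketch of the first load-sharing variant: hash a specified header
# field, then take the remainder by the number of CPUs. zlib.crc32 stands
# in for the unspecified "preset parameter algorithm".
import zlib

def select_target_cpu(header_field: bytes, num_cpus: int) -> int:
    """First scheduling parameter modulo the number of CPUs."""
    first_scheduling_parameter = zlib.crc32(header_field)
    return first_scheduling_parameter % num_cpus
```

Because the hash is deterministic, packets carrying the same header field always land on the same CPU, which keeps a flow's data on one core.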
Optionally, each CPU corresponds to a load bitmap, where the load bitmaps corresponding to the CPUs include the same number of load bitmap units. In this case, the units in the target determining module 802 may instead be as shown in fig. 10, which is a block diagram of the units of another target determining module, and the target determining module 802 includes:
a second parameter calculation unit 802a2, configured to extract a specified header field from the received aggregate stream data, and calculate the specified header field based on a preset parameter algorithm to obtain a second scheduling parameter of the aggregate stream data;
a second remainder operation unit 802b2, configured to perform a remainder operation on a second scheduling parameter in the second parameter calculation unit based on the number of CPUs and the number of load bitmap units, to obtain a second operation result;
a second CPU determining unit 802c2, configured to determine, as the target CPU, the CPU corresponding to the target load bitmap hit by the second operation result in the second remainder operation unit.
in this embodiment, the device for processing aggregate flow data further includes a CPU load bitmap update module, where the CPU load bitmap module is configured to obtain, for each load bitmap, a usage rate of a corresponding CPU based on a preset time interval;
and determining the value of each load bitmap unit in the load bitmap based on the utilization rate so as to update the load bitmap.
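A hedged sketch of this update step: assuming the convention from the earlier example that units "covered" by the utilization rate are set to 0 (the first parameter) and the remaining units to 1 (the second parameter), one CPU's bitmap can be rebuilt from its utilization as follows. The rounding choice is an assumption, not specified by the patent.

```python
# Illustrative rebuild of one CPU's load bitmap from its utilization rate.

def update_load_bitmap(usage: float, num_units: int = 10) -> list[int]:
    """usage in [0, 1]: the first round(usage * num_units) units become 0."""
    covered = min(num_units, round(usage * num_units))
    return [0] * covered + [1] * (num_units - covered)
```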
In this embodiment, the device for processing aggregate flow data further includes a secondary sending module, where the secondary sending module is configured to determine, from among the multiple CPUs, a CPU with a lowest usage rate if a load condition of the target CPU does not meet a load requirement, and send the aggregate flow data to the CPU with the lowest usage rate for processing;
or, invoking the target determination module to determine the target CPU for the aggregate flow data in the plurality of CPUs again by adopting a different load sharing algorithm.
The implementation process of the functions and roles of each module and unit in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be repeated here.
Since the apparatus embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art can understand and implement this without creative effort.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features of specific embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Furthermore, the processes depicted in the accompanying drawings are not necessarily required to be in the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention to the precise form disclosed, and any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (6)

1. A method of processing aggregate flow data, the method being applied to a multi-core architecture device, the multi-core architecture device including a plurality of CPUs thereon, the method comprising:
after the aggregate flow data is received, determining a target CPU for the aggregate flow data in the plurality of CPUs based on a preset load sharing algorithm;
judging whether the load condition of the target CPU meets the load requirement or not;
if yes, the aggregate flow data is sent to the target CPU for processing;
each CPU corresponds to a load bitmap, wherein the load bitmap corresponding to each CPU comprises the same number of load bitmap units, the determining a target CPU for the aggregate flow data in the plurality of CPUs based on a preset load sharing algorithm comprises the following steps:
extracting a specified header field from received aggregate flow data, and operating the specified header field based on a preset parameter algorithm to obtain a second scheduling parameter of the aggregate flow data;
performing remainder operation on the second scheduling parameters based on the number of the CPUs and the number of the load bitmap units to obtain a second operation result;
determining a CPU corresponding to the target load bitmap hit by the second operation result as a target CPU;
the judging whether the load condition of the target CPU meets the load requirement comprises the following steps:
judging whether the load condition of the target CPU meets the load requirement or not based on the value of a target load bitmap unit in the target load bitmap hit by the second operation result;
the CPU load bitmap updating method comprises the following steps:
acquiring the utilization rate of a corresponding CPU (Central processing Unit) based on a preset time interval for each load bitmap;
and determining the value of each load bitmap unit in the load bitmap based on the utilization rate so as to update the load bitmap.
2. The method of claim 1, wherein the determining a target CPU among the plurality of CPUs for aggregate flow data based on a preset load sharing algorithm comprises:
extracting a specified header field from received aggregate stream data, and operating the specified header field based on a preset parameter algorithm to obtain a first scheduling parameter of the aggregate stream data;
performing remainder operation on the first scheduling parameters based on the number of the CPUs to obtain a first operation result;
the target CPU is determined among the plurality of CPUs based on the first operation result.
3. The method according to claim 1, wherein the method further comprises:
if the load condition of the target CPU does not meet the load requirement, determining a CPU with the lowest utilization rate from the plurality of CPUs, and sending the aggregate flow data to the CPU with the lowest utilization rate for processing;
alternatively, a different load sharing algorithm is employed to re-determine the target CPU for the aggregate flow data among the plurality of CPUs.
4. An apparatus for processing aggregate flow data, the apparatus being applied to a multi-core architecture device, the multi-core architecture device including a plurality of CPUs thereon, the apparatus comprising:
the target determining module is used for determining a target CPU for the aggregate flow data in the plurality of CPUs based on a preset load sharing algorithm after the aggregate flow data is received;
the load judging module is used for judging whether the load condition of the target CPU meets the load requirement or not;
the data sending module is used for sending the aggregate flow data to the target CPU for processing when judging that the load condition of the target CPU meets the load requirement;
each CPU corresponds to a load bitmap, wherein the load bitmaps corresponding to each CPU comprise the same number of load bitmap units, and the target determining module comprises:
the second parameter calculation unit is used for extracting a specified header field from the received aggregate flow data, and calculating the specified header field based on a preset parameter algorithm to obtain a second scheduling parameter of the aggregate flow data;
the second remainder operation unit is used for performing remainder operation on the second scheduling parameters in the second parameter calculation unit based on the number of the CPUs and the number of the load bitmap units to obtain a second operation result;
a second CPU determining unit, configured to determine, as the target CPU, the CPU corresponding to the target load bitmap hit by the second operation result in the second remainder operation unit;
the load judging module is used for judging whether the load condition of the target CPU meets the load requirement or not based on the value of the target load bitmap unit in the target load bitmap hit by the second operation result;
the CPU load bitmap updating module is used for acquiring the utilization rate of the corresponding CPU based on a preset time interval for each load bitmap; and determining the value of each load bitmap unit in the load bitmap based on the utilization rate so as to update the load bitmap.
5. The apparatus of claim 4, wherein the targeting module comprises:
the first parameter calculation unit is used for extracting a specified header field from the received aggregate stream data, and calculating the specified header field based on a preset parameter algorithm to obtain a first scheduling parameter of the aggregate stream data;
the first remainder operation unit is used for performing remainder operation on the first scheduling parameters in the first parameter calculation unit based on the number of the CPUs to obtain a first operation result;
and a first CPU determining unit configured to determine the target CPU among the plurality of CPUs based on a first operation result in the first remainder operation unit.
6. The apparatus according to claim 4, further comprising a secondary sending module, configured to determine a CPU with a lowest usage rate from among the plurality of CPUs if the load condition of the target CPU in the load judging module does not meet the load requirement, and send the aggregate flow data to the CPU with the lowest usage rate for processing;
or, invoking the target determination module to determine the target CPU for the aggregate flow data in the plurality of CPUs again by adopting a different load sharing algorithm.
CN202111040279.2A 2021-09-06 2021-09-06 Method and device for processing aggregate flow data Active CN113806083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111040279.2A CN113806083B (en) 2021-09-06 2021-09-06 Method and device for processing aggregate flow data

Publications (2)

Publication Number Publication Date
CN113806083A CN113806083A (en) 2021-12-17
CN113806083B true CN113806083B (en) 2023-07-25

Family

ID=78940497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111040279.2A Active CN113806083B (en) 2021-09-06 2021-09-06 Method and device for processing aggregate flow data

Country Status (1)

Country Link
CN (1) CN113806083B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022417A (en) * 2007-03-16 2007-08-22 华为技术有限公司 Method for selecting load sharing link and router
CN102004673A (en) * 2010-11-29 2011-04-06 中兴通讯股份有限公司 Processing method and system of multi-core processor load balancing
WO2012106894A1 (en) * 2011-07-18 2012-08-16 华为技术有限公司 Method and device for transmitting media stream data in cloud computing system
CN104901898A (en) * 2015-06-08 2015-09-09 东软集团股份有限公司 Load balancing method and device
CN105207946A (en) * 2015-08-27 2015-12-30 国家计算机网络与信息安全管理中心 Load balancing and preparsing method of network data packet
CN108055203A (en) * 2017-12-26 2018-05-18 杭州迪普科技股份有限公司 A kind of equivalent route load sharing method and device
CN108170533A (en) * 2017-12-27 2018-06-15 杭州迪普科技股份有限公司 The processing method and processing device of message, computer readable storage medium
CN109033008A (en) * 2018-07-24 2018-12-18 山东大学 A kind of the Hash computing architecture and its method, Key-Value storage system of dynamic reconfigurable
CN111800348A (en) * 2019-04-09 2020-10-20 中兴通讯股份有限公司 Load balancing method and device
CN112000296A (en) * 2020-08-28 2020-11-27 北京计算机技术及应用研究所 Performance optimization system in full flash memory array

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2938033A1 (en) * 2014-03-19 2015-09-24 Nec Corporation Reception packet distribution method, queue selector, packet processing device, and recording medium
US20170318082A1 (en) * 2016-04-29 2017-11-02 Qualcomm Incorporated Method and system for providing efficient receive network traffic distribution that balances the load in multi-core processor systems

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Revisiting the Design of Data Stream Processing Systems on Multi-Core Processors; S. Zhang et al.; 2017 IEEE 33rd International Conference on Data Engineering (ICDE); 659-670 *
A load balancing algorithm for XMPP-protocol data distribution networks; Zhang Zheyu et al.; Journal of Beijing University of Posts and Telecommunications; Vol. 39, No. S1; 27-31 *
Design and implementation of load management in a distributed stream computing framework; Peng Shukai et al.; China Masters' Theses Full-text Database, Information Science and Technology; No. 11; I139-12 *
A mean-variance based load balancing method for service clusters; Bao Xiao'an et al.; Telecommunications Science; Vol. 33, No. 01; 1-8 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant