CN102404211A - Method and device for realizing load balancing of processors under AMP framework - Google Patents

Method and device for realizing load balancing of processors under AMP framework Download PDF

Info

Publication number
CN102404211A
CN102404211A CN2011103622328A CN201110362232A
Authority
CN
China
Prior art keywords
processor
detection processor
message
ring work queue
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011103622328A
Other languages
Chinese (zh)
Inventor
刘彤 (Liu Tong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Topsec Technology Co Ltd
Original Assignee
Beijing Topsec Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Topsec Technology Co Ltd filed Critical Beijing Topsec Technology Co Ltd
Priority to CN2011103622328A priority Critical patent/CN102404211A/en
Publication of CN102404211A publication Critical patent/CN102404211A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method for realizing load balancing of processors under an AMP framework. The method comprises the following steps: establishing a ring work queue for each detection processor; after the network processor receives a data message, looking up the detection processor corresponding to the message; if the ring work queue of that detection processor is not full, handing the message to that detection processor for detection; if the ring work queue of that detection processor is full, handing the message to a detection processor whose ring work queue is not full for detection; and if the ring work queue of every detection processor is empty, transferring the message-sending work originally handled by the network processor to the detection processors for processing. The technical solution disclosed by the invention achieves efficient balancing of the load among the detection processors and between the network processor and the detection processors, so that the performance of each processor is better exploited.

Description

Method and device for realizing processor load balancing under an AMP framework
Technical field
The invention belongs to the field of communication technology, and particularly relates to a method and device for realizing load balancing of CPUs (Central Processing Units, hereinafter referred to as processors) under an AMP (Asymmetric MultiProcessing) architecture in a multi-core parallel computing environment.
Background technology
In recent years, IPS (Intrusion Prevention System) products have become a new focus of the security product market: not only has the annual market growth rate stayed above 100%, but the range of applications keeps expanding and the technology is being adopted ever more widely. Unlike the bypass deployment of a traditional IDS (Intrusion Detection System), an IPS works inline: it inspects the data it receives and then forwards it toward its destination, which is very similar to security gateway products such as firewalls and VPNs (Virtual Private Networks). This working mode means that an IPS product must not only detect accurately, but also meet performance requirements matching the network it is deployed in.
In fact, ever since IPS products appeared they have used mature technologies such as protocol identification and attack-signature pattern matching; what has always limited their range of application is mainly the performance requirement. Today a firewall reaching gigabit wire speed with 4G or even 10G forwarding capability is commonplace, but achieving the same performance in an IPS is far from easy. An IPS must inspect not only the header of a data message but also its content at the level of the concrete application protocol. As a result, data messages sharing the same five-tuple cannot be "fast-pathed" in an IPS; in other words, there is no "shortcut" anywhere along the IPS processing path, and the IPS must inspect every single message that passes through it one by one. The IPS therefore becomes the main consumer of CPU resources, and its performance depends to a large extent on the processing capability of the hardware processors.
The development of multi-core processors in recent years has opened up broad space for using parallel processing to improve IPS product performance. Because any increase in processor computing power benefits the full detection path of an IPS, the number of cores should in theory be directly proportional to the performance gain. Theory, however, is not practice: the actual improvement depends mainly on how evenly the IPS utilizes each processor, that is, on bringing out the maximum computing power of every processor.
There are generally two processor working architectures in a multi-core parallel computing environment. One is the SMP (Symmetrical MultiProcessing) mode, also called the homogeneous mode. As the name suggests, SMP treats all cores equally: every core undertakes the same work and runs a full copy of the IPS system, so data receiving, connection setup, data detection and data sending are all executed concurrently, as if several IPS systems were running at the same time. This architecture is fairly simple and the load on the cores is balanced, but because all cores do identical work, they inevitably contend heavily for shared resources (memory data, file descriptors, I/O devices and so on). The large number of locks needed to handle this concurrency and synchronization severely limits the achievable performance; worse, as the number of cores grows, the cost of concurrency and synchronization reaches a point where performance no longer increases and may even drop.
The other is the AMP mode, also called the heterogeneous mode. AMP treats the cores differently: they may run different operating systems, or run different tasks within the same operating system. Each core takes on its own role according to the division of tasks, avoiding competition for shared resources and thereby improving the overall performance of the IPS. A complete operating system is usually rather heavyweight, consumes considerable resources and is comparatively inefficient; taking a few physical cores out, building a lightweight runtime environment on them (sometimes simply called a "bare-core" environment) and running a single task there (such as packet transmission and reception, or pattern matching) often yields very high performance in this "clean space". This is the characteristic, and also the advantage, of the AMP mode. Although the AMP architecture is more complex, its performance benefit is so significant that it is widely used at present.
The difficulty of the AMP architecture is that the task allocation to each core must be balanced carefully; otherwise the core loads become unbalanced and performance suffers. The method generally adopted at present is to divide the processor cores into two types: one type, called network processors, handles the reception and transmission of network data messages; the other type, called detection processors, performs the IPS detection. After a network processor receives a network data message, it establishes a connection (a data flow) according to the message's five-tuple, and then uses a hash algorithm to map the connection evenly onto exactly one detection processor. Load balancing is achieved in this way: the data flows are distributed evenly across the detection processors, while it is also guaranteed that messages of the same flow go to the same detection processor, i.e. a data flow is always handled by one detection processor. Fig. 1 is a schematic diagram of this prior-art method for realizing processor load balancing under the AMP architecture.
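The prior-art allocation just described amounts to hashing a message's five-tuple once and pinning the resulting connection to a fixed detection processor. A minimal C sketch of that static mapping is given below; the structure layout, the toy hash and the function names are illustrative assumptions, not taken from the patent.

#include <stdint.h>

/* Hypothetical five-tuple of a TCP/UDP message (field names are illustrative). */
struct five_tuple {
    uint32_t saddr, daddr;   /* source / destination address */
    uint16_t sport, dport;   /* source / destination port    */
    uint8_t  proto;          /* protocol (TCP or UDP)        */
};

/* Toy hash over the five-tuple; a real system would use a stronger hash. */
static uint32_t tuple_hash(const struct five_tuple *t)
{
    return t->saddr ^ t->daddr ^ (((uint32_t)t->sport << 16) | t->dport) ^ t->proto;
}

/* Prior-art static mapping: every message of a flow always lands on the same
 * detection processor, regardless of how heavily loaded that processor is.  */
static int static_assign(const struct five_tuple *t, int n_detect_cpus)
{
    return (int)(tuple_hash(t) % (uint32_t)n_detect_cpus);
}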
The defect of the above method is that, although the data flows are distributed onto the detection processors in a relatively balanced way, different flows differ greatly in the number of data messages, the message sizes and the message content, so the speed at which a detection processor works through a flow differs as well. A flow whose messages are small or contain no application-layer data at all needs no IPS detection and can be disposed of quickly, whereas a data message that carries the HTTP protocol and rich URI (Uniform Resource Identifier) information has to be matched one by one against a large number of IPS rules, which inevitably takes much longer. The loads of the detection processors are therefore in fact unbalanced, which hurts performance. On the other hand, the tasks assigned to the network processor and the detection processors are fixed, and it is only by coincidence that the two kinds of work, network processing and detection, would be in balance; this also limits the improvement of overall performance.
For other products that must inspect data content, such as AV (anti-virus) and DPI (deep packet inspection) products, the processors are likewise divided into network processors and detection processors, and the same problem exists that the two kinds of work, network processing and detection, cannot be balanced well.
Summary of the invention
The present invention provides a method and device for realizing processor load balancing under an AMP architecture, so as to solve the prior-art problem that the loads of the network processor and the detection processors under the AMP architecture cannot be balanced efficiently.
The present invention provides a method for realizing processor load balancing under an AMP architecture, comprising:
establishing a ring work queue for each detection processor;
after the network processor receives a data message, looking up the detection processor corresponding to the message; if said ring work queue of that detection processor is not full, handing the message to that detection processor for detection; if said ring work queue of that detection processor is full, handing the message to a detection processor whose said ring work queue is not full for detection.
Further, looking up the detection processor corresponding to the message after the network processor receives the data message comprises the following steps:
after the network processor receives the data message, if a corresponding connection is found, finding the corresponding detection processor in the record of that connection; if no corresponding connection is found, establishing a connection according to the five-tuple of the message and then determining the corresponding detection processor from the connection.
Further, the corresponding connection is found by the following method: a hash value is calculated from the five-tuple of the message, and the connection is found according to the hash value.
Further, the five-tuple of the message comprises the source address, destination address, source port, destination port and protocol.
Further, determining the corresponding detection processor from the connection is realized with a hash algorithm.
Further, the method for realizing processor load balancing under the AMP architecture also comprises:
dynamically adjusting tasks between the network processor and the detection processors.
Further, dynamically adjusting tasks between the network processor and the detection processors comprises:
when said ring work queue of every detection processor is empty, transferring work originally handled by said network processor to said detection processors for handling.
Still further, the work originally handled by said network processor refers to the work of sending messages.
The present invention also provides a device for realizing processor load balancing under an AMP architecture, comprising:
a ring work queue establishing module, used to establish a ring work queue for each detection processor;
a detection processor load balancing module, used to look up, after the network processor receives a data message, the detection processor corresponding to the message; if said ring work queue of that detection processor is not full, hand the message to that detection processor for detection; and if said ring work queue of that detection processor is full, hand the message to a detection processor whose said ring work queue is not full for detection.
Further, the device for realizing processor load balancing under the AMP architecture also comprises a task adjusting module, which is used to dynamically adjust tasks between the network processor and the detection processors.
The beneficial effects of the present invention are as follows:
The present invention proposes establishing a ring work queue (i.e. a circular work queue) for each detection processor, so that the dynamic load condition of each detection processor can be perceived;
The present invention proposes a method for balancing the load among the detection processors, which helps the detection processors deliver their full performance;
The present invention proposes a load-balancing method between the network processor and the detection processors, thereby solving the prior-art problem that the load between the network processor and the detection processors cannot be balanced, and achieving an improvement of the overall performance of data detection systems under the AMP architecture.
Description of drawings
Fig. 1 is a schematic diagram of the prior-art method for realizing processor load balancing under the AMP architecture;
Fig. 2 is a flow chart of the method for realizing processor load balancing under the AMP architecture according to an embodiment of the invention;
Fig. 3 is a schematic diagram of the method for realizing processor load balancing under the AMP architecture according to an embodiment of the invention;
Fig. 4 is a structural diagram of the device for realizing processor load balancing under the AMP architecture according to an embodiment of the invention.
Embodiment
The present invention is further elaborated below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and do not limit it.
Method embodiment
According to an embodiment of the invention, a method for realizing processor load balancing under an AMP architecture is provided. In the following embodiment an IPS system is used as an example for the detailed description. Assume that this IPS system has one network processor and three detection processors, the detection processors being numbered 0, 1 and 2. Fig. 2 is a flow chart and Fig. 3 a schematic diagram of the method for realizing processor load balancing under the AMP architecture according to the embodiment of the invention. As can be seen from Fig. 2 and Fig. 3, the method comprises the following processing:
Step 201: establish the ring work queues.
In this embodiment the queue length is 512, i.e. at most 512 pieces of message information can be buffered, and a queue-head pointer and a queue-tail pointer are set up for each queue.
The ring work queue works as follows:
After the network processor receives a data message, it looks up the connection according to the message's source address, destination address, source port and destination port; if the lookup fails, a new connection structure is created. The message protocol type, message size, message data address, connection handle (a pointer to the connection structure) and other information are then assembled into a message information structure, which is appended to the tail of the ring work queue of the corresponding detection processor. Both the head and the tail of the ring work queue are dynamic, and the distance between them is maintained: a distance of zero means the queue is empty, and a distance equal to the queue length means the queue is full. The distance increases by 1 whenever a data structure is added at the tail and decreases by 1 whenever one is taken from the head; when the head-to-tail distance is zero the queue is empty and no more data structures can be fetched, and likewise when the distance equals the queue length the queue is full and no more data structures can be added. The detection processor takes message information from the queue head one entry at a time, performs IPS detection on the data message content according to that information, and advances the queue head after each entry is processed.
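As a concrete illustration of the queue behaviour just described, the following C sketch shows one possible layout of the per-detection-processor ring work queue and its message information entries. The structure and function names are assumptions made for illustration; only the behaviour (a 512-entry ring, head-to-tail distance 0 meaning empty and 512 meaning full, the network processor appending at the tail and the detection processor consuming from the head) comes from the text. A production single-producer/single-consumer ring would additionally need memory barriers.

#include <stdbool.h>
#include <stdint.h>

#define RING_LEN 512                 /* at most 512 buffered message information entries */

/* The message information structure described in the text. */
struct msg_info {
    uint16_t proto;                  /* message protocol type                             */
    uint32_t len;                    /* message size                                      */
    void    *data;                   /* message data address                              */
    void    *conn;                   /* connection handle (pointer to connection struct)  */
};

/* Per-detection-processor ring work queue with dynamic head and tail:
 * tail - head == 0        -> queue empty
 * tail - head == RING_LEN -> queue full                                 */
struct ring_queue {
    struct msg_info slots[RING_LEN];
    volatile uint32_t head;          /* advanced by the detection processor */
    volatile uint32_t tail;          /* advanced by the network processor   */
};

static bool ring_empty(const struct ring_queue *q) { return q->tail - q->head == 0; }
static bool ring_full(const struct ring_queue *q)  { return q->tail - q->head == RING_LEN; }

/* Network processor side: append one message information entry at the tail. */
static bool ring_enqueue(struct ring_queue *q, const struct msg_info *m)
{
    if (ring_full(q))
        return false;                /* caller must pick another queue */
    q->slots[q->tail % RING_LEN] = *m;
    q->tail++;
    return true;
}

/* Detection processor side: take the next entry from the head, detect, advance. */
static bool ring_dequeue(struct ring_queue *q, struct msg_info *out)
{
    if (ring_empty(q))
        return false;                /* nothing to detect */
    *out = q->slots[q->head % RING_LEN];
    q->head++;
    return true;
}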
Step 202: redirect data flows among the detection processors. This step specifically comprises:
1) The network processor receives a data message and performs an initial analysis. Messages that are neither TCP (Transmission Control Protocol) nor UDP (User Datagram Protocol) are forwarded directly without processing. For a TCP/UDP message, a hash value is calculated from its five-tuple (source address, destination address, source port, destination port, protocol), and the connection is looked up according to the hash value. All connection structures are recorded in a hash array indexed by the hash value, whose members are pointers to connection structures; at lookup time the hash value is used as the array index to fetch the array member, which is the required connection structure, and a null pointer means no connection has been established yet. Typically this hash array has one million members, meaning the system can support at most one million connections. If a connection is found, go directly to step 4).
2) If no connection is found, a new connection structure is created according to the five-tuple of the message; a connection structure in fact corresponds to a data flow.
3) The hash value of the connection is divided by the number of CPUs minus one and the remainder is taken; the resulting value lies between 0 and 2 and is exactly the number of the detection processor corresponding to this connection. The number is recorded in the connection structure so that it does not have to be recalculated every time. Of course, obtaining the detection processor number of a connection with a hash algorithm in this step is not limited to the concrete algorithm given here; any other algorithm that maps the hash value of a connection to a detection processor number may also be used.
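Steps 1) to 3) can be summarised in code as follows. The sketch reuses the five_tuple and tuple_hash helpers from the earlier prior-art sketch, assumes a hypothetical conn_table array of one million entries as mentioned in the text, and uses hash % 3, which for one network processor plus three detection processors is equivalent to the "remainder after dividing by the number of CPUs minus one" wording above; all names are illustrative and collision handling is omitted for brevity.

#include <stdint.h>
#include <stdlib.h>

#define CONN_TABLE_SIZE (1000 * 1000)   /* the system supports up to one million connections */
#define N_DETECT_CPUS   3               /* detection processors numbered 0, 1 and 2          */

/* A connection structure corresponds to one data flow. */
struct connection {
    struct five_tuple tuple;            /* flow identity (see earlier sketch)        */
    int detect_cpu;                     /* detection processor number, recorded once */
};

/* Hash-indexed array of connection pointers; NULL means "no connection yet". */
static struct connection *conn_table[CONN_TABLE_SIZE];

static struct connection *lookup_or_create(const struct five_tuple *t)
{
    uint32_t h = tuple_hash(t) % CONN_TABLE_SIZE;   /* step 1): hash the five-tuple          */
    struct connection *c = conn_table[h];

    if (c == NULL) {                                /* step 2): new flow, build a connection */
        c = calloc(1, sizeof(*c));
        if (c == NULL)
            return NULL;                            /* allocation failure: forward without detection */
        c->tuple = *t;
        c->detect_cpu = (int)(h % N_DETECT_CPUS);   /* step 3): map hash value to processor number    */
        conn_table[h] = c;
    }
    return c;                                       /* step 4) continues with c->detect_cpu  */
}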
4) The detection processor number recorded in the connection is taken out and its corresponding ring work queue is found. The head and tail pointers of the ring work queue are checked: if the head-to-tail distance is less than 512, the queue is not full and step 6) is executed directly.
5) If the queue is full, the ring work queue of the next detection processor is checked; if that queue is not full, the detection processor number in the connection is changed, redirecting the connection to this new detection processor. Otherwise the check continues with the next detection processor, again judging whether its head-to-tail pointer distance is less than 512, until a detection processor whose ring work queue is not full is found. If the ring work queues of all detection processors are full, detection is abandoned and this connection is forwarded directly.
6) A message information structure is generated, containing the message protocol type, message size, message data address, connection handle (a pointer to the connection structure) and other information, and is added at the tail of the ring work queue to wait for the detection processor to perform detection.
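Steps 4) to 6) then reduce to the dispatch routine below, which builds on the ring_queue, msg_info and connection sketches given earlier plus a hypothetical per-processor queue array. A return value of -1 stands for the case where every ring work queue is full and the message is forwarded without detection.

/* One ring work queue per detection processor (see the earlier sketch). */
static struct ring_queue work_queues[N_DETECT_CPUS];

/* Dispatch one message of a known connection to a detection processor.
 * Returns the processor number the message was queued to, or -1 when every
 * ring work queue is full and the message should be forwarded undetected.  */
static int dispatch(struct connection *c, const struct msg_info *m)
{
    int cpu = c->detect_cpu;                       /* step 4): processor recorded in the connection */

    for (int tried = 0; tried < N_DETECT_CPUS; tried++) {
        if (!ring_full(&work_queues[cpu])) {
            if (cpu != c->detect_cpu)
                c->detect_cpu = cpu;               /* step 5): redirect the connection              */
            ring_enqueue(&work_queues[cpu], m);    /* step 6): append at the tail, await detection  */
            return cpu;
        }
        cpu = (cpu + 1) % N_DETECT_CPUS;           /* step 5): try the next detection processor     */
    }
    return -1;                                     /* all queues full: abandon detection, forward   */
}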
Step 203: dynamically adjust tasks between the network processor and the detection processors. The work of sending messages is used as an example below; of course, the task adjustment between the network processor and the detection processors is not limited to adjusting the message-sending work. This step specifically comprises:
1) Make the message-sending work independent.
The message-sending part of the program is made into an independent module, so that it can be called by the network processor and also by the detection processors. When the network processor calls the send-message module, the code runs on the network processor and occupies network processor load; when a detection processor calls the send-message module, the code runs on that detection processor and occupies detection processor load.
A switch is provided, which is closed under normal conditions. When the switch is closed the send-message module is called by the network processor and never by the detection processors; when the switch is open the situation is exactly the opposite.
2) Dynamically adjust tasks between the network processor and the detection processors according to the load of the detection processors.
A timer is set to periodically check the ring work queue of each detection processor.
If in every ring work queue the queue head equals the queue tail, i.e. all ring work queues are empty, the switch is opened so that the detection processors take over the message-sending work: the load of the detection processors is increased while the load of the network processor is relieved, and the network processor no longer handles message sending. In this way tasks are dynamically adjusted between the two kinds of processors.
Of course, the design can also be that the switch is opened and tasks are dynamically adjusted between the network processor and the detection processors when the number of messages buffered in every ring work queue is below a preset threshold.
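A minimal sketch of the switch and timer from steps 1) and 2) follows: the send-message routine is a shared module, a flag decides which side calls it, and a periodic check of the ring work queues flips the flag. The names, the flag-based design and the empty-queue test are illustrative assumptions; as noted above, the test could equally be a threshold on the number of buffered messages.

#include <stdbool.h>

/* The "switch": closed (false) means the network processor calls the send
 * module, open (true) means the detection processors call it instead.      */
static volatile bool detect_cpus_send = false;

/* Independent send-message module; it runs on whichever processor calls it
 * and therefore contributes to that processor's load.                      */
static void send_message(const struct msg_info *m)
{
    (void)m;   /* ... emit the message on the outgoing interface ... */
}

/* Timer callback: periodically inspect every ring work queue and adjust the
 * switch so that idle detection processors take over the send work.        */
static void rebalance_timer(void)
{
    bool all_empty = true;

    for (int i = 0; i < N_DETECT_CPUS; i++)
        if (!ring_empty(&work_queues[i]))
            all_empty = false;

    detect_cpus_send = all_empty;   /* open when all queues are empty, close otherwise */
}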
The method for realizing processor load balancing under the AMP architecture of the present invention has been described in detail above using an IPS system as an example; the invention is not limited to IPS systems and can equally be applied to other data detection systems such as AV and DPI.
Device embodiment
According to an embodiment of the invention, a device for realizing processor load balancing under an AMP architecture is provided. Fig. 4 is a structural schematic diagram of the device for realizing processor load balancing under the AMP architecture according to the embodiment of the invention. As shown in Fig. 4, the device comprises: a ring work queue establishing module 401, a detection processor load balancing module 402 and a task adjusting module 403. Each module of the embodiment of the invention is explained in detail below.
Specifically, the ring work queue establishing module 401 is used to establish a ring work queue for each detection processor.
The detection processor load balancing module 402 is used to look up, after the network processor receives a data message, the detection processor corresponding to the message; if the ring work queue of that detection processor is not full, the message is handed to that detection processor for detection; if the ring work queue of that detection processor is full, the message is handed to a detection processor whose ring work queue is not full for detection.
The task adjusting module 403 is used to dynamically adjust tasks between the network processor and the detection processors.
For further details of this device embodiment for realizing processor load balancing under the AMP architecture, reference may be made to the description in the method embodiment; they are not repeated here.
Although the preferred embodiments of the present invention have been disclosed for the purpose of illustration, those skilled in the art will recognize that various improvements, additions and substitutions are also possible; therefore, the scope of the present invention should not be limited to the embodiments described above.

Claims (10)

1. A method for realizing processor load balancing under an asymmetric multiprocessing (AMP) architecture, characterized by comprising:
establishing a ring work queue for each detection processor;
after the network processor receives a data message, looking up the detection processor corresponding to the message; if said ring work queue of that detection processor is not full, handing the message to that detection processor for detection; if said ring work queue of that detection processor is full, handing the message to a detection processor whose said ring work queue is not full for detection.
2. The method for realizing processor load balancing under the AMP architecture according to claim 1, characterized in that looking up the detection processor corresponding to the message after the network processor receives the data message comprises the following steps:
after the network processor receives the data message, if a corresponding connection is found, finding the corresponding detection processor in the record of that connection; if no corresponding connection is found, establishing a connection according to the five-tuple of the message and then determining the corresponding detection processor from the connection.
3. The method for realizing processor load balancing under the AMP architecture according to claim 2, characterized in that the corresponding connection is found by the following method: a hash value is calculated from the five-tuple of the message, and the connection is found according to the hash value.
4. The method for realizing processor load balancing under the AMP architecture according to claim 2, characterized in that the five-tuple of the message comprises the source address, destination address, source port, destination port and protocol.
5. The method for realizing processor load balancing under the AMP architecture according to claim 2, characterized in that determining the corresponding detection processor from the connection is realized with a hash algorithm.
6. The method for realizing processor load balancing under the AMP architecture according to any one of claims 1 to 5, characterized by also comprising:
dynamically adjusting tasks between the network processor and the detection processors.
7. The method for realizing processor load balancing under the AMP architecture according to claim 6, characterized in that dynamically adjusting tasks between the network processor and the detection processors comprises:
when said ring work queue of every detection processor is empty, transferring work originally handled by said network processor to said detection processors for handling.
8. The method for realizing processor load balancing under the AMP architecture according to claim 7, characterized in that the work originally handled by said network processor refers to the work of sending messages.
9. A device for realizing processor load balancing under an asymmetric multiprocessing (AMP) architecture, characterized by comprising:
a ring work queue establishing module, used to establish a ring work queue for each detection processor;
a detection processor load balancing module, used to look up, after the network processor receives a data message, the detection processor corresponding to the message; if said ring work queue of that detection processor is not full, hand the message to that detection processor for detection; if said ring work queue of that detection processor is full, hand the message to a detection processor whose said ring work queue is not full for detection.
10. The device for realizing processor load balancing under the AMP architecture according to claim 9, characterized by also comprising a task adjusting module, which is used to dynamically adjust tasks between the network processor and the detection processors.
CN2011103622328A 2011-11-15 2011-11-15 Method and device for realizing load balancing of processors under AMP framework Pending CN102404211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103622328A CN102404211A (en) 2011-11-15 2011-11-15 Method and device for realizing load balancing of processors under AMP framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103622328A CN102404211A (en) 2011-11-15 2011-11-15 Method and device for realizing load balancing of processors under AMP framework

Publications (1)

Publication Number Publication Date
CN102404211A true CN102404211A (en) 2012-04-04

Family

ID=45886014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103622328A Pending CN102404211A (en) 2011-11-15 2011-11-15 Method and device for realizing load balancing of processors under AMP framework

Country Status (1)

Country Link
CN (1) CN102404211A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104426800A (en) * 2013-08-22 2015-03-18 塔塔顾问服务有限公司 System and method for managing message queues in a peer-to-peer communication network
WO2016058495A1 (en) * 2014-10-16 2016-04-21 Huawei Technologies Co., Ltd. Hardware apparatus and method for multiple processors dynamic asymmetric and symmetric mode switching
CN106445667A (en) * 2016-09-27 2017-02-22 西安交大捷普网络科技有限公司 Method for improving auditing framework CPU load balancing
CN106533978A (en) * 2016-11-24 2017-03-22 东软集团股份有限公司 Network load balancing method and system
US10248180B2 (en) 2014-10-16 2019-04-02 Futurewei Technologies, Inc. Fast SMP/ASMP mode-switching hardware apparatus for a low-cost low-power high performance multiple processor system
CN110297661A (en) * 2019-05-21 2019-10-01 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Parallel computing method, system and medium based on AMP framework DSP operating system
US10928882B2 (en) 2014-10-16 2021-02-23 Futurewei Technologies, Inc. Low cost, low power high performance SMP/ASMP multiple-processor system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101576831A (en) * 2008-05-07 2009-11-11 万德洪 Distributed calculating system and realization method
CN101631139A (en) * 2009-05-19 2010-01-20 华耀环宇科技(北京)有限公司 Load balancing software architecture based on multi-core platform and method therefor
CN101778012A (en) * 2009-12-30 2010-07-14 北京天融信科技有限公司 Method for improving IPS detection performance by adopting AMP architecture

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101576831A (en) * 2008-05-07 2009-11-11 万德洪 Distributed calculating system and realization method
CN101631139A (en) * 2009-05-19 2010-01-20 华耀环宇科技(北京)有限公司 Load balancing software architecture based on multi-core platform and method therefor
CN101778012A (en) * 2009-12-30 2010-07-14 北京天融信科技有限公司 Method for improving IPS detection performance by adopting AMP architecture

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104426800A (en) * 2013-08-22 2015-03-18 塔塔顾问服务有限公司 System and method for managing message queues in a peer-to-peer communication network
CN104426800B (en) * 2013-08-22 2018-08-14 塔塔顾问服务有限公司 System and method for the managing message queues in ad hoc communications network
WO2016058495A1 (en) * 2014-10-16 2016-04-21 Huawei Technologies Co., Ltd. Hardware apparatus and method for multiple processors dynamic asymmetric and symmetric mode switching
US9952650B2 (en) 2014-10-16 2018-04-24 Futurewei Technologies, Inc. Hardware apparatus and method for multiple processors dynamic asymmetric and symmetric mode switching
US10248180B2 (en) 2014-10-16 2019-04-02 Futurewei Technologies, Inc. Fast SMP/ASMP mode-switching hardware apparatus for a low-cost low-power high performance multiple processor system
US10928882B2 (en) 2014-10-16 2021-02-23 Futurewei Technologies, Inc. Low cost, low power high performance SMP/ASMP multiple-processor system
US10948969B2 (en) 2014-10-16 2021-03-16 Futurewei Technologies, Inc. Fast SMP/ASMP mode-switching hardware apparatus for a low-cost low-power high performance multiple processor system
CN106445667A (en) * 2016-09-27 2017-02-22 西安交大捷普网络科技有限公司 Method for improving auditing framework CPU load balancing
CN106533978A (en) * 2016-11-24 2017-03-22 东软集团股份有限公司 Network load balancing method and system
CN106533978B (en) * 2016-11-24 2019-09-10 东软集团股份有限公司 A kind of network load balancing method and system
CN110297661A (en) * 2019-05-21 2019-10-01 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Parallel computing method, system and medium based on AMP framework DSP operating system
CN110297661B (en) * 2019-05-21 2021-05-11 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Parallel computing method, system and medium based on AMP framework DSP operating system

Similar Documents

Publication Publication Date Title
CN102404211A (en) Method and device for realizing load balancing of processors under AMP framework
Liu et al. E3:{Energy-Efficient} microservices on {SmartNIC-Accelerated} servers
CN107196870B (en) DPDK-based traffic dynamic load balancing method
Zhang et al. Joint optimization of chain placement and request scheduling for network function virtualization
Wang et al. An intelligent edge-computing-based method to counter coupling problems in cyber-physical systems
Pei et al. Resource aware routing for service function chains in SDN and NFV-enabled network
US8260801B2 (en) Method and system for parallel flow-awared pattern matching
US10079740B2 (en) Packet capture engine for commodity network interface cards in high-speed networks
US9342366B2 (en) Intrusion detection apparatus and method using load balancer responsive to traffic conditions between central processing unit and graphics processing unit
US9274826B2 (en) Methods for task scheduling through locking and unlocking an ingress queue and a task queue
CN102521047B (en) Method for realizing interrupted load balance among multi-core processors
CN107750053A (en) Based on multifactor wireless sensor network dynamic trust evaluation system and method
CN104618304B (en) Data processing method and data handling system
Chaudhary et al. LOADS: Load optimization and anomaly detection scheme for software-defined networks
Papadogiannaki et al. Efficient software packet processing on heterogeneous and asymmetric hardware architectures
CN104394163A (en) Safety detection method based on Web application
Hu et al. Towards efficient server architecture for virtualized network function deployment: Implications and implementations
Haagdorens et al. Improving the performance of signature-based network intrusion detection sensors by multi-threading
Lee et al. The impact of container virtualization on network performance of IoT devices
Adeppady et al. Reducing microservices interference and deployment time in resource-constrained cloud systems
CN103441952A (en) Network data package processing method based on multi-core or many-core embedded processor
WO2017185924A1 (en) Load balancing method and apparatus for signal processing module
CN109308210A (en) A method of optimizing NFV on multiple-core server and forwards service chaining performance
de Oliveira et al. A Real-time and Energy-aware Framework for Data Stream Processing in the Internet of Things.
Ni et al. A SmartNIC-based Load Balancing and Auto Scaling Framework for Middlebox Edge Server

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: 100000, Room 301, north 3, building 1, 3 East Road, Haidian District, Beijing

Applicant after: BEIJING TOPSEC TECHNOLOGY CO., LTD.

Address before: 100000, Room 301, north 3, building 1, 3 East Road, Haidian District, Beijing

Applicant before: Beijing heaven melts letter Science Technologies Co., Ltd.

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: BEIJING HEAVEN MELTS LETTER SCIENCE TECHNOLOGIES CO., LTD. TO: BEIJING TOPSEC TECHNOLOGY CO., LTD.

C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: 100085 Beijing East Road, No. 1, building No. 301, building on the north side of the floor, room 3, room 3

Applicant after: Beijing heaven melts letter Science Technologies Co., Ltd.

Address before: 100085 Beijing East Road, No. 1, building No. 301, building on the north side of the floor, room 3, room 3

Applicant before: BEIJING TOPSEC TECHNOLOGY CO., LTD.

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: BEIJING TOPSEC TECHNOLOGY CO., LTD. TO: BEIJING HEAVEN MELTS LETTER SCIENCE TECHNOLOGIES CO., LTD.

C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120404