CN103677760A - Event parallel controller based on Openflow and event parallel processing method thereof - Google Patents


Info

Publication number
CN103677760A
CN103677760A (application CN201310647876.0A; granted as CN103677760B)
Authority
CN
China
Prior art keywords
flow, state, base, task, message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310647876.0A
Other languages
Chinese (zh)
Other versions
CN103677760B (en)
Inventor
刘轶 (Liu Yi)
宋平 (Song Ping)
刘驰 (Liu Chi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kaixi Beijing Information Technology Co., Ltd.
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201310647876.0A
Publication of CN103677760A
Application granted
Publication of CN103677760B
Legal status: Expired - Fee Related

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an OpenFlow-based event-parallel controller and its event parallel processing method. The method separates the transmission and reception of OpenFlow messages from the processing of OpenFlow events, and uses additional computational threads to accelerate event processing. After startup, the controller establishes links with the switches and distributes the links evenly over several I/O threads, so that the messages on each link are handled by exactly one I/O thread. On receiving an OpenFlow message, the controller triggers the corresponding OpenFlow event and, according to the event type, generates processing tasks on flow objects and state objects; these tasks can be processed by different threads. During the processing of a flow event, subtasks can be generated dynamically and executed by the threads. Shared state is processed by a unique state thread. Compared with existing parallel processing methods for OpenFlow events, the method offers better performance scalability and simpler data access patterns.

Description

An OpenFlow-based event-parallel controller and its event parallel processing method
Technical field
The present invention relates to an OpenFlow controller in the field of software-defined networking, and to a method for the parallel processing of events inside an OpenFlow controller, in particular to the parallel processing inside the handling of OpenFlow flow events.
Background technology
OpenFlow was first proposed in 2008. Its idea is to separate the two functional modules of traditional network devices, data forwarding and routing control, and to use a centralized controller to manage and configure various network devices through a standardized interface. OpenFlow has attracted broad industry attention and has become a very popular technology in recent years. Because it brings flexible programmability to the network, it has been widely applied in campus networks, wide-area networks, mobile networks, data center networks and other settings.
Section 4.1 of the OpenFlow Switch Specification, published by the Open Networking Foundation on December 31, 2009, introduces the types of OpenFlow messages. OpenFlow messages comprise controller-to-switch messages (sent from the controller to a switch), asynchronous messages, and symmetric messages. The asynchronous messages include Packet-in (flow arrival), Flow-Removed, Port-status and Error messages.
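The mapping from asynchronous message type to triggered controller event can be sketched as a small dispatch table. This is a minimal illustration; the dispatch machinery and function names are assumptions, not part of the specification.

```python
# Sketch: mapping asynchronous OpenFlow message types to controller events.
# The event names follow the terminology above; the dispatch machinery
# itself is an illustrative assumption.

EVENT_FOR_MESSAGE = {
    "Packet-in": "Packet-in event",
    "Flow-Removed": "Flow-Removed event",
    "Port-status": "Port-status event",
    "Error": "Error event",
}

def trigger_event(message_type: str) -> str:
    """Return the name of the event triggered by an asynchronous message."""
    try:
        return EVENT_FOR_MESSAGE[message_type]
    except KeyError:
        raise ValueError(f"not an asynchronous message: {message_type}")
```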
The paper "SDN technology based on OpenFlow", by Zuo et al., published in the Journal of Software on March 29, 2013, describes an OpenFlow network as consisting mainly of two parts, OpenFlow switches and a controller. An OpenFlow switch forwards packets according to its flow tables and represents the data forwarding plane; the controller implements management and control over the whole network, and its control logic represents the control plane. The processing unit of each OpenFlow switch is made up of flow tables, each flow table consists of many flow entries, and a flow entry represents a forwarding rule. A packet entering the switch obtains its corresponding action by looking it up in the flow tables. The controller maintains the essential information of the whole network, such as the topology, the network elements and the offered services, in a network view. Applications running on the controller invoke the global data in the network view and then operate the OpenFlow switches to manage and control the whole network.
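The flow-table lookup just described can be sketched as follows. The entry layout and field names are illustrative assumptions; a table miss stands in for the case that is reported to the controller via a Packet-in message.

```python
# Sketch of switch-side forwarding: consult the flow table (a list of flow
# entries, each a match pattern plus an action) to decide how to handle an
# incoming packet. Field names are illustrative assumptions.

def lookup(flow_table, packet):
    """Return the action of the first flow entry matching the packet."""
    for entry in flow_table:
        if all(packet.get(field) == value
               for field, value in entry["match"].items()):
            return entry["action"]
    return "send-to-controller"  # table miss: reported via Packet-in

table = [
    {"match": {"dst": "10.0.0.2"}, "action": "output:2"},
    {"match": {"dst": "10.0.0.3"}, "action": "output:3"},
]
```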
The characteristics of OpenFlow make the processing efficiency of the OpenFlow controller the key to whether the network can run normally. The processing efficiency of the original single-threaded controllers falls far short of the processing demands of large-scale OpenFlow networks. The prior art therefore uses multithreading to process OpenFlow events concurrently inside the controller and thereby improve its efficiency.
Existing parallel processing methods for OpenFlow events, however, suffer from scalability problems in processing performance when a multi-core environment is used to control larger OpenFlow networks with complex behavior: (1) the handling of a flow event does not support parallel operation and cannot accommodate computations of high time complexity; (2) adding threads does little to improve the processing efficiency of flow events; (3) accesses to shared data during OpenFlow event handling are coupled, which hurts performance scalability.
Addressing the above problems, the present invention proposes, inside the OpenFlow controller, a new event-parallel controller and event parallel processing method for OpenFlow events, especially OpenFlow flow events.
Summary of the invention
One object of the present invention is to provide an OpenFlow-based event-parallel controller that exploits multi-core environments: in large-scale OpenFlow network scenarios it uses I/O threads to send and receive OpenFlow messages in parallel, uses computational threads to accelerate the processing of OpenFlow events, adds parallel support inside the handling of flow events, strengthens the computing power of the OpenFlow controller, and improves performance scalability.
A second object of the present invention is to propose an OpenFlow-based event parallel processing method. The method uses multiple threads to send and receive OpenFlow messages in parallel; after an OpenFlow message is received, the corresponding OpenFlow event is triggered. For a flow event and its handler, a processing task for that flow event is generated and executed in parallel by the stream threads; for events of other types and their handlers, processing tasks for shared state are generated and executed in parallel by the state threads. Inside the processing of a flow event, subtasks can be generated dynamically, and through task stealing several threads can process the same flow event concurrently.
The present invention is an OpenFlow-based event-parallel controller comprising a stream processing module (1), a state processing module (2) and an OpenFlow message distribution control module (3).
In its first aspect, the OpenFlow message distribution control module (3) adopts an asynchronous non-blocking I/O model to receive, from the receive buffers of the links, the OpenFlow messages sent by the OpenFlow switches (4). These OpenFlow messages include Packet-in, Flow-Removed, Port-status and Error messages.
In its second aspect, the OpenFlow message distribution control module (3) sends the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_Base_flow}, FA_{Flow-Removed}^{FLOW_Base_flow}} to the main thread's local task queue Q_z of the stream processing module (1).

The stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_Base_flow}, FA_{Flow-Removed}^{FLOW_Base_flow}} are obtained as follows: (A) first, a Packet-in event is triggered by a Packet-in message; then, according to the Packet-in event, the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure are generated; finally, according to the start method of the Base_flow structure, the stream processing task FA_{Packet-in}^{FLOW_Base_flow} corresponding to the Packet-in event is generated. (B) First, a Flow-Removed event is triggered by a Flow-Removed message; then, according to the Flow-Removed event, the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure shown in table 1 are generated; finally, according to the start method of the Base_flow structure, the stream processing task FA_{Flow-Removed}^{FLOW_Base_flow} corresponding to the Flow-Removed event is generated.
In its third aspect, the OpenFlow message distribution control module (3) sends the state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_Base_state}, SA_{Error}^{STATE_Base_state}} to the access task queues P_state^{STATE_Base_state} = {P_1, P_2, ..., P_s} of the state processing module (2).

The state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_Base_state}, SA_{Error}^{STATE_Base_state}} are obtained as follows: (A) first, a Port-status event is triggered by a Port-status message; then, according to the Port-status event, a processing task for the status objects STATE_Base_state = {S_1, S_2, ..., S_s} of the Base_state structure is generated; this Port-status state processing task is denoted SA_{Port-status}^{STATE_Base_state}. (B) First, an Error event is triggered by an Error message; then, according to the Error event, a processing task for the status objects STATE_Base_state = {S_1, S_2, ..., S_s} of the Base_state structure shown in table 2 is generated; this Error state processing task is denoted SA_{Error}^{STATE_Base_state}.
In its fourth aspect, the OpenFlow message distribution control module (3) receives the controller-to-switch messages output by the stream processing module (1).
In its fifth aspect, the OpenFlow message distribution control module (3) adopts an asynchronous non-blocking I/O model to output controller-to-switch messages to the OpenFlow switches (4) from the transmit buffers of the links owned by the message threads TH_3 = {C_1, C_2, ..., C_c}.
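The routing rule of the second and third aspects (Packet-in and Flow-Removed messages yield stream processing tasks for the main thread's local queue; Port-status and Error messages yield state processing tasks for the access task queues) can be sketched as follows. The queue objects and task tuples are illustrative assumptions.

```python
# Sketch of the distribution module's routing rule: flow-related messages
# become TASK_3-1 stream processing tasks, state-related messages become
# TASK_3-2 state processing tasks. Queues and tuples are assumptions.
from collections import deque

FLOW_MESSAGES = {"Packet-in", "Flow-Removed"}
STATE_MESSAGES = {"Port-status", "Error"}

def distribute(message_type, main_queue: deque, state_queue: deque):
    """Route one received message to the proper task queue."""
    if message_type in FLOW_MESSAGES:
        main_queue.append(("TASK_3-1", message_type))
    elif message_type in STATE_MESSAGES:
        state_queue.append(("TASK_3-2", message_type))
    else:
        raise ValueError(f"unhandled message type: {message_type}")
```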
In its first aspect, the stream processing module (1) receives the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_Base_flow}, FA_{Flow-Removed}^{FLOW_Base_flow}} output by the OpenFlow message distribution control module (3).
In its second aspect, the stream processing module (1) saves TASK_{3-1} in the main thread's local task queue Q_z.
In its third aspect, the stream processing module (1) sends TASK_{3-1} by polling to the local task queues Q^{TH_1} = {Q_1, Q_2, ..., Q_a} of the computational threads.
In its fourth aspect, the stream processing module (1) executes the specific tasks in TASK_{3-1}, dynamically generates processing tasks for the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f}, denoted flow-object subtasks, and adds these subtasks to Q^{TH_1} = {Q_1, Q_2, ..., Q_a}.
In its fifth aspect, the stream processing module (1) executes the specific tasks in TASK_{3-1} and dynamically generates processing tasks for the status objects STATE_Base_state = {S_1, S_2, ..., S_s}, denoted status-object subtasks. A status-object subtask is judged according to the value of its global attribute: if global is true, the state is a globally shared state, the subtask is handed over to the state processing module (2), and the task completion message STA_{2-1} of the state processing module (2) is awaited; otherwise, if global is not true, the state is a locally shared state and the subtask is executed directly by the thread of the stream processing module (1) that generated it.
In its sixth aspect, the task loads executed by the computational threads of the stream processing module (1) are balanced by means of task stealing.
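Task stealing rests on a double-ended local queue per computational thread: the owner pushes and pops work at one end, and an idle thread steals from the opposite end of a victim's queue. The following class is an illustrative assumption, not the patent's own data structure.

```python
# Sketch of a per-thread work-stealing queue: the owner works LIFO at the
# bottom for locality, while thieves take FIFO from the top.
from collections import deque

class WorkStealingQueue:
    def __init__(self):
        self._tasks = deque()

    def push(self, task):
        """Owner adds new work at the bottom."""
        self._tasks.append(task)

    def pop(self):
        """Owner takes its newest task (LIFO), or None if empty."""
        return self._tasks.pop() if self._tasks else None

    def steal(self):
        """A thief takes the oldest task (FIFO), or None if empty."""
        return self._tasks.popleft() if self._tasks else None
```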
In its seventh aspect, the stream processing module (1) outputs controller-to-switch messages to the OpenFlow message distribution control module (3): the controller-to-switch messages output by the computational threads are written synchronously into the transmit buffers of the links owned by the message threads TH_3 = {C_1, C_2, ..., C_c}.
In its first aspect, the state processing module (2) receives the state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_Base_state}, SA_{Error}^{STATE_Base_state}} sent by the OpenFlow message distribution control module (3) and saves TASK_{3-2} in the access task queues of the status objects STATE_Base_state = {S_1, S_2, ..., S_s}.

In its second aspect, the state processing module (2) receives the status-object subtasks sent by the stream processing module (1) and saves them in the access task queues of the status objects STATE_Base_state = {S_1, S_2, ..., S_s}.
In its third aspect, among the state threads TH_2 = {B_1, B_2, ..., B_b}, thread B_1 extracts from P_state^{STATE_Base_state} the access task queues P_state^{B_1} belonging to B_1; then B_1 executes the tasks in P_state^{B_1} by polling; after a task is completed, a task completion message STA_{2-1}^{B_1} is sent to the stream processing module (1).

Among the state threads TH_2 = {B_1, B_2, ..., B_b}, thread B_2 extracts the access task queues P_state^{B_2} belonging to B_2; then B_2 executes the tasks in P_state^{B_2} by polling; after a task is completed, a task completion message STA_{2-1}^{B_2} is sent to the stream processing module (1).

Among the state threads TH_2 = {B_1, B_2, ..., B_b}, thread B_b extracts the access task queues P_state^{B_b} belonging to B_b; then B_b executes the tasks in P_state^{B_b} by polling; after a task is completed, a task completion message STA_{2-1}^{B_b} is sent to the stream processing module (1).
In its fourth aspect, the set of task completion messages sent by the state processing module (2) to the stream processing module (1) is denoted STA_{2-1} = {STA_{2-1}^{B_1}, STA_{2-1}^{B_2}, ..., STA_{2-1}^{B_b}}.
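The access discipline of the state processing module (every status object has one access task queue, each queue is drained by exactly one state thread, and completion is then reported back) can be sketched as below. This is an illustrative assumption; the real module runs one such loop per state thread.

```python
# Sketch: a state thread drains one access task queue against its status
# object; because only this thread touches the object, no lock is needed.
# Returning "STA_2-1" stands in for the completion message to the stream
# module. Names are illustrative assumptions.
from collections import deque

def drain(access_queue: deque, state: dict):
    """Run every queued task against the shared state, then report completion."""
    while access_queue:
        task = access_queue.popleft()
        task(state)
    return "STA_2-1"
```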
The event parallel processing method of the present invention for the OpenFlow controller has the following advantages:
1. Inside the OpenFlow controller, the present invention separates event processing from message transmission and reception, processes OpenFlow events in parallel with computational threads, and adds parallel support inside the handling of flow events; this effectively strengthens the computing power of the OpenFlow controller and improves the scalability of its processing performance.
2. The present invention uses state threads to process shared state uniquely, so shared data can be accessed without mutual exclusion; this simplifies the access to shared data during event handling and improves access efficiency to a certain extent.
Description of the drawings
Fig. 1 is a structural block diagram of the OpenFlow-based event-parallel controller of the present invention.
Fig. 2 is a schematic diagram of the parallel processing flow inside the OpenFlow controller of the present invention.
Fig. 3 is a comparison of speed-up ratios based on the switch program.
Fig. 4 is a comparison of speed-up ratios based on the QPAS algorithm.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, an OpenFlow-based event-parallel controller of the present invention comprises a stream processing module 1, a state processing module 2 and an OpenFlow message distribution control module 3. For convenience of the statements below, the controller of the present invention is referred to as POCA. POCA is used together with an existing OpenFlow controller and is embedded in the OpenFlow network architecture.
In the present invention, POCA performs the associated processing of the Packet-in (flow arrival), Flow-Removed, Port-status and Error messages among the OpenFlow messages.
In the present invention, the event corresponding to a Packet-in message is called a Packet-in event; the event corresponding to a Flow-Removed message is called a Flow-Removed event; the event corresponding to a Port-status message is called a Port-status event; the event corresponding to an Error message is called an Error event.
In the present invention, among the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure, F_1 denotes the flow object of the first type, F_2 the flow object of the second type, and F_f the flow object of the last type, f being the identification number of a flow object; for convenience of description, F_f is also referred to below as a flow object of any type.
In the present invention, among the status objects STATE_Base_state = {S_1, S_2, ..., S_s} of the Base_state structure, S_1 denotes the status object of the first type, S_2 the status object of the second type, and S_s the status object of the last type, s being the identification number of a status object; for convenience of description, S_s is also referred to below as a status object of any type. Every status object corresponds to a unique access task queue: the access task queue of the status object S_1 of the first type is denoted P_1 (the first access task queue P_1), that of the status object S_2 of the second type is denoted P_2 (the second access task queue P_2), and that of the status object S_s of the last type is denoted P_s (the last access task queue P_s). As a set, the access task queues are written P_state^{STATE_Base_state} = {P_1, P_2, ..., P_s}.
In the present invention, the threads of the stream processing module 1 are denoted stream threads TH_1 = {A_1, A_2, ..., A_z, ..., A_a}, where A_1 denotes the first thread of the stream processing module 1, A_2 the second thread, A_z the z-th thread and A_a the last thread, a being a thread identification number in the stream processing module 1; for convenience of description, A_a is also referred to below as any thread. When some thread A_z among the stream threads TH_1 = {A_1, A_2, ..., A_z, ..., A_a} serves as the main thread, the remaining threads are the computational threads. Every thread corresponds to a unique local task queue: the local task queue of the first thread A_1 is denoted Q_1 (the first local task queue Q_1), that of the second thread A_2 is denoted Q_2 (the second local task queue Q_2), that of the z-th thread A_z is denoted Q_z (the z-th local task queue Q_z, also called the main thread's local task queue Q_z), and that of the last thread A_a is denoted Q_a (the last local task queue Q_a). The set of local task queues corresponding to the computational threads is denoted Q^{TH_1} = {Q_1, Q_2, ..., Q_a}.
In the present invention, the threads of the state processing module 2 are denoted state threads TH_2 = {B_1, B_2, ..., B_b}, where B_1 denotes the first thread of the state processing module 2, B_2 the second thread and B_b the last thread, b being a thread identification number in the state processing module 2; for convenience of description, B_b is also referred to below as any thread. The status objects STATE_Base_state = {S_1, S_2, ..., S_s} in the state processing module 2 are allocated evenly over the state threads TH_2 = {B_1, B_2, ..., B_b}; any one state thread B_b processes several access task queues, and the access task queues processed by the state thread B_b are denoted P_state^{B_b} = {P_1, P_2, ..., P_s}, with P_state^{B_b} ⊆ P_state^{STATE_Base_state}. Any access task queue P_s corresponds to a unique thread B_b.
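The even allocation of access task queues over state threads, with each queue P_s owned by exactly one thread B_b, can be realized by a round-robin (modulo) mapping. The patent does not fix the mapping, so the following is an assumption.

```python
# Sketch: spread s access task queues over b state threads so every queue
# has exactly one owning thread. Round-robin modulo assignment (assumed).

def assign_queues(num_queues: int, num_threads: int):
    """Map queue index -> owning state-thread index, round-robin."""
    return {q: q % num_threads for q in range(num_queues)}
```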
In the present invention, the threads of the OpenFlow message distribution control module 3 are denoted message threads TH_3 = {C_1, C_2, ..., C_c}, where C_1 denotes the first thread of the OpenFlow message distribution control module 3, C_2 the second thread and C_c the last thread, c being a thread identification number in the OpenFlow message distribution control module 3; for convenience of description, C_c is also referred to below as any thread.
The links between the OpenFlow message distribution control module 3 and the OpenFlow switches 4 are denoted CON_{SW}^{SV}, where SV denotes the OpenFlow controller and SW the set of OpenFlow switches, SW = {D_1, D_2, ..., D_d}, with D_1 the first OpenFlow switch, D_2 the second OpenFlow switch and D_d the last OpenFlow switch, d being the identification number of an OpenFlow switch; for convenience of description, D_d is also referred to below as any OpenFlow switch. The first link is denoted CON_{D_1}^{SV}, the second link CON_{D_2}^{SV}, and the last link CON_{D_d}^{SV} (also referred to as any link CON_{D_d}^{SV}); CON_{SW}^{SV} = {CON_{D_1}^{SV}, CON_{D_2}^{SV}, ..., CON_{D_d}^{SV}}. The links CON_{SW}^{SV} between the OpenFlow message distribution control module 3 and the OpenFlow switches 4 are allocated evenly over the message threads TH_3 = {C_1, C_2, ..., C_c}, and any one link corresponds to a unique thread C_c.
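The even distribution of links over message threads, with each link owned by a unique thread C_c, can likewise be sketched; a least-loaded assignment is one possible realization, offered as an assumption since the text only requires the distribution to be even.

```python
# Sketch: assign each newly established switch link to the message thread
# currently owning the fewest links, keeping the distribution even. The
# counter representation is an illustrative assumption.

def assign_link(link_counts: list) -> int:
    """Return the index of the message thread that takes the new link."""
    owner = min(range(len(link_counts)), key=lambda i: link_counts[i])
    link_counts[owner] += 1
    return owner
```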
(1) The OpenFlow message distribution control module 3
As shown in Fig. 1, in its first aspect the OpenFlow message distribution control module 3 adopts an asynchronous non-blocking I/O model to receive, from the receive buffers of the links, the OpenFlow messages sent by the OpenFlow switches 4.
These OpenFlow messages include Packet-in, Flow-Removed, Port-status and Error messages.
In its second aspect, the OpenFlow message distribution control module 3 sends the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_Base_flow}, FA_{Flow-Removed}^{FLOW_Base_flow}} to the main thread's local task queue Q_z of the stream processing module 1.

In the present invention, the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_Base_flow}, FA_{Flow-Removed}^{FLOW_Base_flow}} are obtained as follows: (A) first, a Packet-in event is triggered by a Packet-in message; then, according to the Packet-in event, the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure shown in table 1 are generated; finally, according to the start method of the Base_flow structure, the stream processing task FA_{Packet-in}^{FLOW_Base_flow} corresponding to the Packet-in event is generated. (B) First, a Flow-Removed event is triggered by a Flow-Removed message; then, according to the Flow-Removed event, the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure shown in table 1 are generated; finally, according to the start method of the Base_flow structure, the stream processing task FA_{Flow-Removed}^{FLOW_Base_flow} corresponding to the Flow-Removed event is generated.
In the present invention, among the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure, F_1 denotes the flow object of the first type, F_2 the flow object of the second type, and F_f the flow object of the last type, f being the identification number of a flow object; for convenience of description, F_f is also referred to below as a flow object of any type.
Table 1: the Base_flow class (reproduced as an image in the original; it defines the Base_flow structure, including the start method used to generate the stream processing tasks).
In its third aspect, the OpenFlow message distribution control module 3 sends the state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_Base_state}, SA_{Error}^{STATE_Base_state}} to the access task queues P_state^{STATE_Base_state} = {P_1, P_2, ..., P_s} of the state processing module 2.

In the present invention, the state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_Base_state}, SA_{Error}^{STATE_Base_state}} are obtained as follows: (A) first, a Port-status event is triggered by a Port-status message; then, according to the Port-status event, a processing task for the status objects STATE_Base_state = {S_1, S_2, ..., S_s} of the Base_state structure shown in table 2 is generated; this Port-status state processing task is denoted SA_{Port-status}^{STATE_Base_state}. (B) First, an Error event is triggered by an Error message; then, according to the Error event, a processing task for the status objects STATE_Base_state = {S_1, S_2, ..., S_s} of the Base_state structure shown in table 2 is generated; this Error state processing task is denoted SA_{Error}^{STATE_Base_state}.
Table 2: the Base_state class (reproduced as an image in the original; it defines the Base_state structure, including the global attribute in its fourth row used to distinguish globally shared from locally shared states).
In its fourth aspect, the OpenFlow message distribution control module 3 receives the controller-to-switch messages output by the stream processing module 1.
In its fifth aspect, the OpenFlow message distribution control module 3 adopts an asynchronous non-blocking I/O model to output controller-to-switch messages to the OpenFlow switches 4 from the transmit buffers of the links owned by the message threads TH_3 = {C_1, C_2, ..., C_c}.
(2) The stream processing module 1
As shown in Fig. 1, in its first aspect the stream processing module 1 receives the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_Base_flow}, FA_{Flow-Removed}^{FLOW_Base_flow}} output by the OpenFlow message distribution control module 3;

In its second aspect, TASK_{3-1} is saved in the main thread's local task queue Q_z;

In its third aspect, TASK_{3-1} is sent by polling to the local task queues Q^{TH_1} = {Q_1, Q_2, ..., Q_a} of the computational threads of the stream processing module 1;
In its fourth aspect, the specific tasks in TASK_{3-1} are executed; processing tasks for the flow objects FLOW_Base_flow = {F_1, F_2, ..., F_f} are generated dynamically, denoted flow-object subtasks, and these subtasks are added to Q^{TH_1} = {Q_1, Q_2, ..., Q_a};
In its fifth aspect, the specific tasks in TASK_{3-1} are executed; processing tasks for the status objects STATE_Base_state = {S_1, S_2, ..., S_s} are generated dynamically, denoted status-object subtasks. A status-object subtask is judged according to the value of its global attribute (shown in row 4 of table 2): if global is true, the state is a globally shared state, the subtask is handed over to the state processing module 2, and the task completion message STA_{2-1} of the state processing module 2 is awaited; otherwise, if global is not true, the state is a locally shared state and the subtask is executed directly by the thread of the stream processing module 1 that generated it;
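The branch on the global attribute can be sketched as below; the dictionary representation of a status-object subtask is an illustrative assumption.

```python
# Sketch of routing a dynamically generated status-object subtask by its
# global attribute (table 2): globally shared state goes to the state
# module (the generator then awaits STA_2-1), locally shared state runs on
# the generating stream thread. The dict layout is an assumption.

def route_subtask(subtask: dict) -> str:
    if subtask["global"]:
        return "state-module"      # enqueue, then await STA_2-1
    return "execute-locally"       # run on the generating stream thread
```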
In its sixth aspect, the computational threads of the stream processing module 1 perform load balancing by means of task stealing.
Public information on the task-stealing approach: Robert D. Blumofe and Charles E. Leiserson, "Scheduling multithreaded computations by work stealing", Journal of the ACM 46(5), September 1999, pages 720-748, ACM, New York, NY, USA.
In the present invention, when the stream processing module 1 receives a task completion message STA_{2-1} output by the state processing module 2, it first judges whether any computational thread is waiting for this message; if so, the computational thread waiting for the message STA_{2-1} = {STA_{2-1}^{B_1}, STA_{2-1}^{B_2}, ..., STA_{2-1}^{B_b}} continues execution; otherwise, the message is ignored.
In its seventh aspect, the stream processing module 1 outputs controller-to-switch messages to the OpenFlow message distribution control module 3. In the present invention, the controller-to-switch messages output by the computational threads are written synchronously into the transmit buffers of the links owned by the message threads TH_3 = {C_1, C_2, ..., C_c}.
(3) The state processing module 2
As shown in Fig. 1, in its first aspect the state processing module 2 receives the state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_Base_state}, SA_{Error}^{STATE_Base_state}} sent by the OpenFlow message distribution control module 3, and saves TASK_{3-2} in the access task queues of the status objects STATE_Base_state = {S_1, S_2, ..., S_s};

In its second aspect, the state processing module 2 receives the status-object subtasks sent by the stream processing module 1 and saves them in the access task queues of the status objects STATE_Base_state = {S_1, S_2, ..., S_s};
In its third aspect, among the state threads TH_2 = {B_1, B_2, ..., B_b}, thread B_1 extracts from P_state^{STATE_Base_state} the access task queues P_state^{B_1} belonging to B_1; then B_1 executes the tasks in P_state^{B_1} by polling; after a task is completed, a task completion message STA_{2-1}^{B_1} is sent to the stream processing module 1.

Among the state threads TH_2 = {B_1, B_2, ..., B_b}, thread B_2 extracts the access task queues P_state^{B_2} belonging to B_2; then B_2 executes the tasks in P_state^{B_2} by polling; after a task is completed, a task completion message STA_{2-1}^{B_2} is sent to the stream processing module 1.

Among the state threads TH_2 = {B_1, B_2, ..., B_b}, thread B_b extracts the access task queues P_state^{B_b} belonging to B_b; then B_b executes the tasks in P_state^{B_b} by polling; after a task is completed, a task completion message STA_{2-1}^{B_b} is sent to the stream processing module 1.
Sending to stream processing module 1 for state processing module 2 fourth aspects of task completes massage set and is designated as STA 2 - 1 = { STA 2 - 1 B 1 , STA 2 - 1 B 2 , . . . , STA 2 - 1 B b } .
As shown in Figure 2, the Openflow-based event parallel controller designed by the present invention performs parallel processing of Openflow events through the following steps:
Step 1: Parallel transceiving of Openflow messages and triggering of the corresponding Openflow events
In the Openflow-based event parallel controller of the present invention, each switch has a unique link.
During link establishment, the first thread C_1 of the message threads TH_3 = {C_1, C_2, ..., C_c} listens for link requests from the Openflow switches SW = {D_1, D_2, ..., D_d}. After a link request is received, the links CON_{SW}^{SV} = {CON_{D_1}^{SV}, CON_{D_2}^{SV}, ..., CON_{D_d}^{SV}} are established and distributed evenly among the message threads TH_3. Any single link CON_{D_d}^{SV} is processed by exactly one thread C_c.
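The even distribution of links among the message threads described above can be sketched as a simple round-robin deal. The function name and data shapes are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: C_1 accepts link requests from the switches and
# deals the resulting links out evenly (round-robin) to the message
# threads TH_3; each link then belongs to exactly one thread.
from itertools import cycle

def distribute_links(switches, n_message_threads):
    """Return a mapping: message-thread index -> list of switch links."""
    assignment = {i: [] for i in range(n_message_threads)}
    rr = cycle(range(n_message_threads))
    for sw in switches:          # one unique link per switch
        assignment[next(rr)].append(sw)
    return assignment

# 5 switches dealt to 2 message threads:
a = distribute_links(["D1", "D2", "D3", "D4", "D5"], 2)
assert a == {0: ["D1", "D3", "D5"], 1: ["D2", "D4"]}
```

After the deal, each thread iterates only over its own link list, so no cross-thread coordination is needed during transceiving.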
During Openflow message reception, the message threads TH_3 = {C_1, C_2, ..., C_c} use an asynchronous non-blocking I/O model to receive, from the reception buffers of the links, the Openflow messages sent by the switches SW = {D_1, D_2, ..., D_d}. A Packet-in message triggers a Packet-in event; a Flow-Removed message triggers a Flow-Removed event; a Port-status message triggers a Port-status event; an Error message triggers an Error event.
During Openflow message transmission, the message threads TH_3 = {C_1, C_2, ..., C_c} use the asynchronous non-blocking I/O model to output controller-to-switch messages to the switches SW = {D_1, D_2, ..., D_d} from the transmission buffers of the links owned by TH_3. Operations on a link's transmission buffer must be synchronized.
In the present invention, parallel transceiving of Openflow messages uses multiple message threads to send and receive Openflow messages in parallel; each Openflow switch link is processed by a unique message thread, so no mutual exclusion is needed between message threads. This maximizes message transceiving efficiency. In addition, the asynchronous non-blocking I/O model reduces interference between the message transceiving process and the message processing process, further improving transceiving efficiency.
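A minimal sketch of one message thread's non-blocking receive path, built on Python's stdlib selectors module, may make the model concrete. The helper names are illustrative, and the type-code-to-event mapping assumes OpenFlow 1.0 numbering; neither is taken from the patent:

```python
import selectors
import socket

# Assumed OpenFlow 1.0 type codes -> triggered events (illustrative).
EVENT_BY_TYPE = {10: "Packet-in", 11: "Flow-Removed", 12: "Port-status", 1: "Error"}

def poll_links_once(sel, timeout=1.0):
    """Process one round of ready links owned by this thread; return events."""
    events = []
    for key, _ in sel.select(timeout):
        data = key.fileobj.recv(65535)     # non-blocking read from the link
        if not data:                       # peer closed the link
            sel.unregister(key.fileobj)
            key.fileobj.close()
            continue
        msg_type = data[1]                 # byte 1 of the OpenFlow header is the type
        if msg_type in EVENT_BY_TYPE:
            events.append(EVENT_BY_TYPE[msg_type])
    return events

# Demo with a local socket pair standing in for a switch link:
a, b = socket.socketpair()
a.setblocking(False)
sel = selectors.DefaultSelector()
sel.register(a, selectors.EVENT_READ)
b.send(bytes([4, 10]) + b"packet-in payload")   # version=4, type=10
assert poll_links_once(sel) == ["Packet-in"]
```

Because the selector is private to the thread and registers only that thread's links, the loop runs without any locks, matching the no-mutual-exclusion property claimed above.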
Step 2: Parallel processing of Openflow events
For a Packet-in event or a Flow-Removed event, the flow objects FLOW_{Base_flow} = {F_1, F_2, ..., F_f} of the Base_flow structure shown in Table 1 are first generated; the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_{Base_flow}}, FA_{Flow-Removed}^{FLOW_{Base_flow}}} are then generated according to the start method of the Base_flow structure; finally, TASK_{3-1} is sent to the local task queue Q_z of the main thread in stream processing module 1.
For the stream processing tasks TASK_{3-1}, the main thread A_z of stream processing module 1 sends TASK_{3-1} by polling to the local task queues Q^{TH_1} = {Q_1, Q_2, ..., Q_a} of the computational threads of stream processing module 1. The computational threads TH_1 = {A_1, A_2, ..., A_a} execute the specific tasks in TASK_{3-1}, dynamically generate flow-object subtasks TASK_{FLOW_{Base_flow}}^{sub}, and add TASK_{FLOW_{Base_flow}}^{sub} to Q^{TH_1} = {Q_1, Q_2, ..., Q_a}.
The computational threads TH_1 = {A_1, A_2, ..., A_a} also dynamically generate status-object subtasks TASK_{STATE_{Base_state}}^{sub} while executing the specific tasks in TASK_{3-1}, and judge each subtask according to the value of its global attribute (shown in the fourth column of Table 2): if global is true, the state is a globally shared state, the subtask TASK_{STATE_{Base_state}}^{sub} is handed to state processing module 2, and the task-completion message STA_{2-1} of state processing module 2 is awaited; otherwise, if global is not true, the state is a locally shared state, and the subtask is executed directly by the thread in stream processing module 1 that generated it. The computational threads in stream processing module 1 balance their load by task stealing.
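The task-stealing load balancing among computational threads can be sketched with per-thread deques: the owner pushes and pops at one end, while an idle thread steals from the opposite end of another thread's deque. This is a single-threaded illustration of the policy under assumed names, not the patent's implementation:

```python
from collections import deque

class WorkStealingQueues:
    """Illustrative sketch of per-thread local task queues with stealing."""

    def __init__(self, n_threads):
        self.queues = [deque() for _ in range(n_threads)]

    def push(self, tid, task):
        self.queues[tid].append(task)        # owner pushes to the tail

    def pop_or_steal(self, tid):
        if self.queues[tid]:
            return self.queues[tid].pop()    # owner pops from the tail (LIFO)
        for victim, q in enumerate(self.queues):
            if victim != tid and q:
                return q.popleft()           # thief steals from the head (FIFO)
        return None                          # no work anywhere

qs = WorkStealingQueues(2)
qs.push(0, "packet-in-F1")
qs.push(0, "packet-in-F2")
assert qs.pop_or_steal(1) == "packet-in-F1"  # idle thread 1 steals thread 0's oldest task
assert qs.pop_or_steal(0) == "packet-in-F2"
```

Stealing from the head while the owner works at the tail is the usual way to keep owner and thief from contending for the same task; a concurrent implementation would additionally need an atomic or locked deque.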
For a Port-status event or an Error event, the processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_{Base_state}}, SA_{Error}^{STATE_{Base_state}}} on the status objects STATE_{Base_state} = {S_1, S_2, ..., S_s} of the Base_state structure shown in Table 2 are first generated; TASK_{3-2} is then sent to the access task queues P_{state}^{STATE_{Base_state}} = {P_1, P_2, ..., P_s} of state processing module 2.
For the tasks TASK_{3-2} and the status-object subtasks TASK_{STATE_{Base_state}}^{sub}, in state processing module 2 state thread B_1 of the state threads TH_2 = {B_1, B_2, ..., B_b} extracts from P_{state}^{STATE_{Base_state}} the access task queues P_{state}^{B_1} that belong to B_1, executes the tasks in P_{state}^{B_1} by polling, and after completing P_{state}^{B_1} sends its task-completion message STA_{2-1}^{B_1} to stream processing module 1; likewise, state thread B_2 extracts its access task queues P_{state}^{B_2}, executes the tasks in P_{state}^{B_2} by polling, and after completing them sends its task-completion message STA_{2-1}^{B_2} to stream processing module 1; and so on, until state thread B_b extracts its access task queues P_{state}^{B_b}, executes the tasks in P_{state}^{B_b} by polling, and after completing them sends its task-completion message STA_{2-1}^{B_b} to stream processing module 1.
In the present invention, after an Openflow message is received, the corresponding Openflow event is triggered, and processing tasks on flow objects and status objects are produced according to the event type and handed to different threads for parallel processing. During stream event processing, subtasks can be generated dynamically and processed concurrently by multiple computational threads using task stealing, which improves stream event processing efficiency. Each shared state in the controller is processed by a unique state thread, which not only simplifies access to shared states but also improves, to some extent, the efficiency with which the computational threads process stream events. The event parallel processing method of the present invention gives the Openflow controller better performance scalability.
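The "one unique state thread per shared state" rule can be sketched as a single-writer thread that polls an access task queue and reports a completion message back, loosely mirroring STA_{2-1}. All names are illustrative assumptions:

```python
import queue
import threading

def state_thread(owned_states, tasks, completions):
    """Single writer for `owned_states`: polls its access task queue,
    applies each update, and emits a task-completion message."""
    while True:
        item = tasks.get()
        if item is None:                      # shutdown sentinel
            break
        state_name, update = item
        # No lock needed: this thread is the only one touching these states.
        owned_states[state_name] = update(owned_states.get(state_name))
        completions.put(("STA", state_name))  # completion message back to requester

states = {}
tasks, done = queue.Queue(), queue.Queue()
t = threading.Thread(target=state_thread, args=(states, tasks, done))
t.start()
tasks.put(("port_status", lambda old: "UP"))  # a computational thread hands off a task
assert done.get(timeout=2) == ("STA", "port_status")
tasks.put(None)
t.join()
assert states == {"port_status": "UP"}
```

Funnelling every access through the owner's queue replaces fine-grained locking with message passing, which is what simplifies the shared-state access pattern described above.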
Verification embodiment
When running the switch program, whose computation load is small, the Openflow-based event parallel processing system POCA achieves a higher speedup than other Openflow controllers as the number of threads grows, once the thread count exceeds 8. The reason is that each I/O thread processes only the switch links it owns, so the threads do not interfere with one another, which improves processing performance. Figure 3 shows the speedup comparison for the switch program.
Figure 4 shows the speedup comparison for the QPAS algorithm. When running the QPAS algorithm, whose computation load is larger, POCA achieves a higher speedup than NOX as the number of threads grows. The reason is that POCA accelerates the interior of each event handler with parallel computational threads, improving the processing efficiency of each event and hence the overall processing efficiency.
The invention discloses an Openflow-based event parallel controller and an event parallel processing method thereof. The method separates the transceiving of Openflow messages from the processing of Openflow events and uses additional computational threads to accelerate Openflow event processing. After the application starts, the controller establishes links with the switches and distributes the links evenly among multiple I/O threads; the message transceiving on each link is processed by a unique I/O thread. Upon receiving an Openflow message, the application triggers the corresponding Openflow event and produces processing tasks on flow objects and status objects according to the event type, handing them to different threads for processing. During stream event processing, subtasks can be generated dynamically and executed by multiple threads in parallel. Shared states are processed by unique state threads. Compared with existing parallel processing methods for Openflow events, the method of the invention has better performance scalability and a simpler data access pattern.

Claims (6)

1. An Openflow-based event parallel controller, characterized in that the controller includes a stream processing module (1), a state processing module (2) and an Openflow message distribution control module (3);
In a first aspect, the Openflow message distribution control module (3) uses an asynchronous non-blocking I/O model to receive, from the reception buffers of the links, the Openflow messages sent by the Openflow switches (4); the Openflow messages include Packet-in messages, Flow-Removed messages, Port-status messages and Error messages.
In a second aspect, the Openflow message distribution control module (3) sends the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_{Base_flow}}, FA_{Flow-Removed}^{FLOW_{Base_flow}}} to the local task queue Q_z of the main thread of the stream processing module (1);
The stream processing tasks TASK_{3-1} are obtained as follows: (A) first, a Packet-in event is triggered by a Packet-in message; then the flow objects FLOW_{Base_flow} = {F_1, F_2, ..., F_f} of the Base_flow structure are generated according to the Packet-in event; finally, the stream processing task FA_{Packet-in}^{FLOW_{Base_flow}} corresponding to the Packet-in event is generated according to the start method of the Base_flow structure; (B) first, a Flow-Removed event is triggered by a Flow-Removed message; then the flow objects FLOW_{Base_flow} = {F_1, F_2, ..., F_f} of the Base_flow structure shown in Table 1 are generated according to the Flow-Removed event; finally, the stream processing task FA_{Flow-Removed}^{FLOW_{Base_flow}} corresponding to the Flow-Removed event is generated according to the start method of the Base_flow structure.
In a third aspect, the Openflow message distribution control module (3) sends the state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_{Base_state}}, SA_{Error}^{STATE_{Base_state}}} to the access task queues P_{state}^{STATE_{Base_state}} = {P_1, P_2, ..., P_s} of the state processing module (2);
The state processing tasks TASK_{3-2} are obtained as follows: (A) first, a Port-status event is triggered by a Port-status message; then the processing task on the status objects STATE_{Base_state} = {S_1, S_2, ..., S_s} of the Base_state structure is generated according to the Port-status event, the Port-status state processing task being denoted SA_{Port-status}^{STATE_{Base_state}}; (B) first, an Error event is triggered by an Error message; then the processing task on the status objects STATE_{Base_state} = {S_1, S_2, ..., S_s} of the Base_state structure shown in Table 2 is generated according to the Error event, the Error state processing task being denoted SA_{Error}^{STATE_{Base_state}}.
In a fourth aspect, the Openflow message distribution control module (3) receives the controller-to-switch messages output by the stream processing module (1);
In a fifth aspect, the Openflow message distribution control module (3) uses the asynchronous non-blocking I/O model to output controller-to-switch messages to the Openflow switches (4) from the transmission buffers of the links owned by the message threads TH_3 = {C_1, C_2, ..., C_c}.
In a first aspect, the stream processing module (1) receives the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_{Base_flow}}, FA_{Flow-Removed}^{FLOW_{Base_flow}}} output by the Openflow message distribution control module (3);
In a second aspect, the stream processing module (1) saves TASK_{3-1} into the local task queue Q_z of the main thread;
In a third aspect, the stream processing module (1) sends TASK_{3-1} by polling to the local task queues Q^{TH_1} = {Q_1, Q_2, ..., Q_a} of the computational threads;
In a fourth aspect, the stream processing module (1) executes the specific tasks in TASK_{3-1}, dynamically generates processing tasks on the flow objects FLOW_{Base_flow} = {F_1, F_2, ..., F_f}, denoted flow-object subtasks TASK_{FLOW_{Base_flow}}^{sub}, and adds TASK_{FLOW_{Base_flow}}^{sub} to Q^{TH_1} = {Q_1, Q_2, ..., Q_a};
In a fifth aspect, the stream processing module (1) executes the specific tasks in TASK_{3-1}, dynamically generates processing tasks on the status objects STATE_{Base_state} = {S_1, S_2, ..., S_s}, denoted status-object subtasks TASK_{STATE_{Base_state}}^{sub}, and judges each subtask according to the value of its global attribute: if global is true, the state is a globally shared state, the subtask TASK_{STATE_{Base_state}}^{sub} is handed to the state processing module (2), and the task-completion message STA_{2-1} of the state processing module (2) is awaited; otherwise, if global is not true, the state is a locally shared state, and the subtask is executed directly by the thread in the stream processing module (1) that generated it;
In a sixth aspect, the computational threads in the stream processing module (1) balance their task load by task stealing.
In a seventh aspect, the stream processing module (1) outputs controller-to-switch messages to the Openflow message distribution control module (3); writing the controller-to-switch messages output by the computational threads into the transmission buffers of the links owned by the message threads TH_3 = {C_1, C_2, ..., C_c} requires synchronization.
In a first aspect, the state processing module (2) receives the state processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_{Base_state}}, SA_{Error}^{STATE_{Base_state}}} sent by the Openflow message distribution control module (3) and saves TASK_{3-2} into the access task queues P_{state}^{STATE_{Base_state}} of the status objects STATE_{Base_state} = {S_1, S_2, ..., S_s};
In a second aspect, the state processing module (2) receives the status-object subtasks TASK_{STATE_{Base_state}}^{sub} sent by the stream processing module (1) and saves them into the access task queues P_{state}^{STATE_{Base_state}} of the status objects STATE_{Base_state} = {S_1, S_2, ..., S_s};
In a third aspect, state thread B_1 of the state threads TH_2 = {B_1, B_2, ..., B_b} extracts from P_{state}^{STATE_{Base_state}} the access task queues P_{state}^{B_1} that belong to B_1, executes the tasks in P_{state}^{B_1} by polling, and after completing P_{state}^{B_1} sends its task-completion message STA_{2-1}^{B_1} to the stream processing module (1); likewise, state thread B_2 extracts its access task queues P_{state}^{B_2}, executes the tasks in P_{state}^{B_2} by polling, and after completing them sends its task-completion message STA_{2-1}^{B_2} to the stream processing module (1); and so on, until state thread B_b extracts its access task queues P_{state}^{B_b}, executes the tasks in P_{state}^{B_b} by polling, and after completing them sends its task-completion message STA_{2-1}^{B_b} to the stream processing module (1);
In a fourth aspect, the set of task-completion messages sent by the state processing module (2) to the stream processing module (1) is denoted STA_{2-1} = {STA_{2-1}^{B_1}, STA_{2-1}^{B_2}, ..., STA_{2-1}^{B_b}}.
2. The Openflow-based event parallel controller according to claim 1, characterized in that the controller is used together with an existing Openflow controller and is embedded in the Openflow network architecture.
3. The Openflow-based event parallel controller according to claim 1, characterized in that the Base_flow structure is as shown in the following table:
[Table 1: Base_flow structure]
4. The Openflow-based event parallel controller according to claim 1, characterized in that the Base_state structure is as shown in the following table:
[Table 2: Base_state structure]
5. An event parallel processing method performed by the Openflow-based event parallel controller according to claim 1, characterized by the following steps:
Step 1: Parallel transceiving of Openflow messages and triggering of the corresponding Openflow events
In the Openflow-based event parallel controller, each switch has a unique link.
During link establishment, the first thread C_1 of the message threads TH_3 = {C_1, C_2, ..., C_c} listens for link requests from the Openflow switches SW = {D_1, D_2, ..., D_d}. After a link request is received, the links CON_{SW}^{SV} = {CON_{D_1}^{SV}, CON_{D_2}^{SV}, ..., CON_{D_d}^{SV}} are established and distributed evenly among the message threads TH_3. Any single link CON_{D_d}^{SV} is processed by exactly one thread C_c.
During Openflow message reception, the message threads TH_3 = {C_1, C_2, ..., C_c} use an asynchronous non-blocking I/O model to receive, from the reception buffers of the links, the Openflow messages sent by the switches SW = {D_1, D_2, ..., D_d}. A Packet-in message triggers a Packet-in event; a Flow-Removed message triggers a Flow-Removed event; a Port-status message triggers a Port-status event; an Error message triggers an Error event.
During Openflow message transmission, the message threads TH_3 = {C_1, C_2, ..., C_c} use the asynchronous non-blocking I/O model to output controller-to-switch messages to the switches SW = {D_1, D_2, ..., D_d} from the transmission buffers of the links owned by TH_3. Operations on a link's transmission buffer must be synchronized.
Step 2: Parallel processing of Openflow events
For a Packet-in event or a Flow-Removed event, the flow objects FLOW_{Base_flow} = {F_1, F_2, ..., F_f} of the Base_flow structure shown in Table 1 are first generated; the stream processing tasks TASK_{3-1} = {FA_{Packet-in}^{FLOW_{Base_flow}}, FA_{Flow-Removed}^{FLOW_{Base_flow}}} are then generated according to the start method of the Base_flow structure; finally, TASK_{3-1} is sent to the local task queue Q_z of the main thread in the stream processing module (1);
For the stream processing tasks TASK_{3-1}, the main thread A_z of the stream processing module (1) sends TASK_{3-1} by polling to the local task queues Q^{TH_1} = {Q_1, Q_2, ..., Q_a} of the computational threads of the stream processing module (1); the computational threads TH_1 = {A_1, A_2, ..., A_a} execute the specific tasks in TASK_{3-1}, dynamically generate flow-object subtasks TASK_{FLOW_{Base_flow}}^{sub}, and add TASK_{FLOW_{Base_flow}}^{sub} to Q^{TH_1} = {Q_1, Q_2, ..., Q_a};
The computational threads TH_1 = {A_1, A_2, ..., A_a} also dynamically generate status-object subtasks TASK_{STATE_{Base_state}}^{sub} while executing the specific tasks in TASK_{3-1}, and judge each subtask according to the value of its global attribute (shown in the fourth column of Table 2): if global is true, the state is a globally shared state, the subtask TASK_{STATE_{Base_state}}^{sub} is handed to the state processing module (2), and the task-completion message STA_{2-1} of the state processing module (2) is awaited; otherwise, if global is not true, the state is a locally shared state, and the subtask is executed directly by the thread in the stream processing module (1) that generated it; the computational threads in the stream processing module (1) balance their load by task stealing.
For a Port-status event or an Error event, the processing tasks TASK_{3-2} = {SA_{Port-status}^{STATE_{Base_state}}, SA_{Error}^{STATE_{Base_state}}} on the status objects STATE_{Base_state} = {S_1, S_2, ..., S_s} of the Base_state structure shown in Table 2 are first generated; TASK_{3-2} is then sent to the access task queues P_{state}^{STATE_{Base_state}} = {P_1, P_2, ..., P_s} of the state processing module (2).
For the tasks TASK_{3-2} and the status-object subtasks TASK_{STATE_{Base_state}}^{sub}, in the state processing module (2) state thread B_1 of the state threads TH_2 = {B_1, B_2, ..., B_b} extracts from P_{state}^{STATE_{Base_state}} the access task queues P_{state}^{B_1} that belong to B_1, executes the tasks in P_{state}^{B_1} by polling, and after completing P_{state}^{B_1} sends its task-completion message STA_{2-1}^{B_1} to the stream processing module (1); likewise, state thread B_2 extracts its access task queues P_{state}^{B_2}, executes the tasks in P_{state}^{B_2} by polling, and after completing them sends its task-completion message STA_{2-1}^{B_2} to the stream processing module (1); and so on, until state thread B_b extracts its access task queues P_{state}^{B_b}, executes the tasks in P_{state}^{B_b} by polling, and after completing them sends its task-completion message STA_{2-1}^{B_b} to the stream processing module (1).
6. The event parallel processing method performed by the Openflow-based event parallel controller according to claim 1, characterized in that the speedup ratio increases as the number of threads increases.
CN201310647876.0A 2013-12-04 2013-12-04 A kind of event concurrency controller based on Openflow and event concurrency disposal route thereof Expired - Fee Related CN103677760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310647876.0A CN103677760B (en) 2013-12-04 2013-12-04 A kind of event concurrency controller based on Openflow and event concurrency disposal route thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310647876.0A CN103677760B (en) 2013-12-04 2013-12-04 A kind of event concurrency controller based on Openflow and event concurrency disposal route thereof

Publications (2)

Publication Number Publication Date
CN103677760A true CN103677760A (en) 2014-03-26
CN103677760B CN103677760B (en) 2015-12-02

Family

ID=50315439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310647876.0A Expired - Fee Related CN103677760B (en) 2013-12-04 2013-12-04 A kind of event concurrency controller based on Openflow and event concurrency disposal route thereof

Country Status (1)

Country Link
CN (1) CN103677760B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5968160A (en) * 1990-09-07 1999-10-19 Hitachi, Ltd. Method and apparatus for processing data in multiple modes in accordance with parallelism of program by using cache memory
US20130144973A1 (en) * 2011-12-01 2013-06-06 International Business Machines Corporation Method and system of network transfer adaptive optimization in large-scale parallel computing system
CN103401777A (en) * 2013-08-21 2013-11-20 中国人民解放军国防科学技术大学 Parallel search method and system of Openflow

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SANDHYA NARAYAN等: "Hadoop Acceleration in and OpenFlow-based cluster", 《HIGH PERFORMANCE COMPUTING,NETWORKING STORAGE AND ANALYSIS(SCC),2012 SC COMPANION》 *
李博等: "多核环境下基于分组的自适应任务调度算法", 《2012全国高性能计算算术年会论文集》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156260A (en) * 2014-08-07 2014-11-19 北京航空航天大学 Concurrent queue access control method and system based on task eavesdropping
CN104156260B (en) * 2014-08-07 2017-03-15 北京航空航天大学 The concurrent queue accesses control system that a kind of task based access control is stolen
CN104660696A (en) * 2015-02-10 2015-05-27 上海创景计算机系统有限公司 Parallel transceiver building system and building method thereof
WO2016127582A1 (en) * 2015-02-13 2016-08-18 华为技术有限公司 Method and apparatus for defending against message attacks
US10536321B2 (en) 2015-02-13 2020-01-14 Huawei Technologies Co., Ltd. Message attack defense method and apparatus
CN109669724A (en) * 2018-11-26 2019-04-23 许昌许继软件技术有限公司 A kind of more order concurrent type frog service means for acting as agent and system based on linux system
CN110177146A (en) * 2019-05-28 2019-08-27 东信和平科技股份有限公司 A kind of non-obstruction Restful communication means, device and equipment based on asynchronous event driven
CN112380028A (en) * 2020-10-26 2021-02-19 上汽通用五菱汽车股份有限公司 Asynchronous non-blocking response type message processing method
CN116185662A (en) * 2023-02-14 2023-05-30 国家海洋环境预报中心 Asynchronous parallel I/O method based on NetCDF and non-blocking communication
CN116185662B (en) * 2023-02-14 2023-11-17 国家海洋环境预报中心 Asynchronous parallel I/O method based on NetCDF and non-blocking communication

Also Published As

Publication number Publication date
CN103677760B (en) 2015-12-02

Similar Documents

Publication Publication Date Title
CN103677760A (en) Event parallel controller based on Openflow and event parallel processing method thereof
Datla et al. Wireless distributed computing: a survey of research challenges
CN106850829B (en) A kind of micro services design method based on non-blocking communication
ATE245833T1 (en) DISTRIBUTED COMPUTING ENVIRONMENT WITH REAL-TIME SEQUENCE LOGIC AND TIME-DETERMINISTIC ARCHITECTURE
Gotoda et al. Task scheduling algorithm for multicore processor system for minimizing recovery time in case of single node fault
Chen et al. Robust consensus of nonlinear multiagent systems with switching topology and bounded noises
CN105302650A (en) Dynamic multi-resource equitable distribution method oriented to cloud computing environment
Pedarsani et al. Scheduling tasks with precedence constraints on multiple servers
CN112631800A (en) Kafka-oriented data transmission method and system, computer equipment and storage medium
Zhu et al. Petri net modeling and one-wafer scheduling of single-arm multi-cluster tools
Tian et al. Scheduling dependent coflows to minimize the total weighted job completion time in datacenters
CN104636206A (en) Optimization method and device for system performance
Lin et al. Multi-round real-time divisible load scheduling for clusters
Kondratyev et al. Concept of distributed processing system of images flow in terms of π-calculus
Pan et al. Efficient flow scheduling in distributed deep learning training with echelon formation
Zhang et al. Research on Delay Model of Deterministic Service Chain in the Industrial Internet
Banerjee et al. Contention-free many-to-many communication scheduling for high performance clusters
Boukala et al. Distributed verification of modular systems
Ye et al. Integrated real-time scheduling strategy based on Small-scale wireless sensor networks
Mische et al. Distributed memory on chip–bringing together low power and real-time
Pickartz et al. Swift: A transparent and flexible communication layer for pcie-coupled accelerators and (co-) processors
Peng et al. Improving Performance of Batch Point-to-Point Communications by Active Contention Reduction Through Congestion-Avoiding Message Scheduling
Lui et al. Scheduling in synchronous networks and the greedy algorithm
Zhang et al. The new method of liveness verification with Object-Oriented Timed Petri Nets
Yang et al. Optimal one-wafer cyclic scheduling analysis of transport-dominant single-arm multi-cluster tools

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210423

Address after: 100160, No. 4, building 12, No. 128, South Fourth Ring Road, Fengtai District, Beijing, China (1515-1516)

Patentee after: Kaixi (Beijing) Information Technology Co.,Ltd.

Address before: 100191 Haidian District, Xueyuan Road, No. 37,

Patentee before: BEIHANG University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151202

Termination date: 20211204