CN103677760B - OpenFlow-based event concurrency controller and event concurrency processing method thereof - Google Patents
- Publication number
- CN103677760B CN103677760B CN201310647876.0A CN201310647876A CN103677760B CN 103677760 B CN103677760 B CN 103677760B CN 201310647876 A CN201310647876 A CN 201310647876A CN 103677760 B CN103677760 B CN 103677760B
- Authority
- CN
- China
- Prior art keywords
- flow
- state
- message
- thread
- task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention discloses an OpenFlow-based event concurrency controller and its event concurrency processing method. The method separates the sending and receiving of OpenFlow messages from the processing of OpenFlow events, and uses additional computational threads to accelerate OpenFlow event processing. After an application is opened, the controller establishes links with the switches and distributes the links evenly over multiple I/O threads, so that the messages of each link are sent and received by a unique I/O thread. After the application receives an OpenFlow message, the corresponding OpenFlow event is triggered, and processing tasks for flow objects and status objects are produced according to the event type and handed to different threads. During flow-event processing, subtasks can be generated dynamically and executed in parallel by multiple threads. Shared state is processed by a unique state thread. Compared with existing methods for parallel processing of OpenFlow events, the method of the invention has better performance scalability and a simpler data access pattern.
Description
Technical field
The present invention relates to an OpenFlow controller in the field of software-defined networking, and to a method for parallel processing of events inside the OpenFlow controller, in particular parallel processing inside the handling of OpenFlow flow events.
Background technology
OpenFlow was first proposed in 2008. Its idea is to separate the data forwarding and routing control functions of traditional network devices, using a centralized controller to manage and configure the various network devices through a standardized interface. OpenFlow has attracted wide attention in industry and has become very popular in recent years. Because it brings flexible programmability to the network, it has been widely applied in campus networks, wide-area networks, mobile networks and data-center networks.
The "OpenFlow Switch Specification", published on December 31, 2009 by the Open Networking Foundation, describes the types of OpenFlow messages in its Section 4.1. OpenFlow messages comprise controller-to-switch messages (messages sent from the controller to a switch), Asynchronous messages and Symmetric messages. The asynchronous messages include Packet-in (flow-arrival message), Flow-Removed (flow-removal message), Port-status (port status message) and Error (error message).
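The taxonomy above can be sketched as a tiny classifier. Only the four asynchronous types named in this text are listed; the full specification defines many more message types, so this is an illustrative assumption, not a complete mapping.

```python
from enum import Enum

class OFCategory(Enum):
    CONTROLLER_TO_SWITCH = "controller-to-switch"
    ASYNCHRONOUS = "asynchronous"
    SYMMETRIC = "symmetric"

# The four asynchronous message types named in this text.
ASYNC_TYPES = {"Packet-in", "Flow-Removed", "Port-status", "Error"}

def categorize(msg_type):
    """Map a message type name to its category; only async types are known here."""
    if msg_type in ASYNC_TYPES:
        return OFCategory.ASYNCHRONOUS
    raise ValueError(f"not an asynchronous type listed here: {msg_type}")

assert categorize("Flow-Removed") is OFCategory.ASYNCHRONOUS
```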
"SDN technology based on OpenFlow", published in the Journal of Software on March 29, 2013 by Zuo et al., describes an OpenFlow network as consisting mainly of two parts: OpenFlow switches and a controller. An OpenFlow switch forwards packets according to its flow tables and represents the data forwarding plane; the controller realizes management and control through a whole-network view, and its control logic represents the control plane. The processing unit of each OpenFlow switch is its flow tables; each flow table consists of many flow entries, and a flow entry represents a forwarding rule. A packet entering the switch obtains its corresponding action by looking up the flow tables. The controller maintains the essential information of the whole network through a network view, such as the topology, the network elements and the services provided. Applications running on the controller invoke the global data of the network view and in turn operate the OpenFlow switches to manage and control the whole network.
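The flow-table forwarding just described can be sketched as follows: a switch matches an incoming packet against its flow entries and applies the matching entry's action; a miss would cause a Packet-in to the controller. The field names (`in_port`, `eth_dst`) are illustrative assumptions, not taken from the patent.

```python
def lookup(flow_table, packet):
    """Return the action of the first flow entry all of whose match fields
    agree with the packet; None models a table miss (-> Packet-in)."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return None

flow_table = [
    {"match": {"eth_dst": "aa:bb"}, "action": "output:2"},
    {"match": {"in_port": 1},       "action": "flood"},
]

assert lookup(flow_table, {"eth_dst": "aa:bb", "in_port": 3}) == "output:2"
assert lookup(flow_table, {"eth_dst": "cc:dd", "in_port": 1}) == "flood"
assert lookup(flow_table, {"eth_dst": "cc:dd", "in_port": 9}) is None  # miss
```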
The characteristics of OpenFlow make the processing efficiency of the controller the key to whether the network can run normally. The processing efficiency of the original single-threaded controllers falls far short of the demands of large-scale OpenFlow networks. Prior art therefore uses multithreading to process OpenFlow events concurrently inside the controller and improve its efficiency.
However, when existing methods for concurrent OpenFlow event processing use multi-core environments to control large, behaviorally complex OpenFlow networks, their processing performance scales poorly: (1) flow-event handling does not support parallel operation internally and cannot accommodate computations of high time complexity; (2) adding threads does little to improve the efficiency of flow-event processing; (3) during OpenFlow event processing, accesses to shared data are coupled, which hurts performance scalability.
Addressing these problems, the present invention proposes, inside the OpenFlow controller and for OpenFlow events, especially OpenFlow flow events, a new event concurrency controller and event concurrency processing method.
Summary of the invention
A first object of the present invention is to provide an OpenFlow-based event concurrency controller that, in a multi-core environment and under large-scale OpenFlow network scenarios, sends and receives OpenFlow messages in parallel with I/O threads, accelerates OpenFlow event processing with computational threads, adds parallel support inside flow-event handling, strengthens the computing power of the OpenFlow controller, and improves performance scalability.
A second object of the present invention is to propose an OpenFlow-based event concurrency processing method. The method sends and receives OpenFlow messages with multiple threads in parallel; after an OpenFlow message is received, the corresponding OpenFlow event is triggered. For a flow event and its handler, a processing task for the flow event is generated and executed in parallel by the stream threads; for events of other types and their handlers, processing tasks for shared state are generated and executed by the state threads. Inside the processing of a flow event, subtasks can be generated dynamically, and through task stealing multiple threads can process the same flow event concurrently.
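The dispatch rule just stated can be sketched with standard-library queues: flow events go to a pool of stream threads, while tasks on shared state go to a single state thread, so the shared state itself never needs a lock. This is a minimal sketch under our own naming assumptions, not the patent's code.

```python
import queue
import threading

flow_q = queue.Queue()    # drained by the stream-thread pool
state_q = queue.Queue()   # drained by the single state thread
results = []
res_lock = threading.Lock()

def worker(q):
    """Pull callables from q until the None sentinel arrives."""
    while True:
        task = q.get()
        if task is None:
            break
        with res_lock:
            results.append(task())

def dispatch(event_type, task):
    """Route a task by event type, per the method described above."""
    target = flow_q if event_type in ("Packet-in", "Flow-Removed") else state_q
    target.put(task)

threads = [threading.Thread(target=worker, args=(q,)) for q in (flow_q, flow_q, state_q)]
for t in threads:
    t.start()
dispatch("Packet-in", lambda: "flow-task-done")
dispatch("Port-status", lambda: "state-task-done")
for q, n in ((flow_q, 2), (state_q, 1)):   # one sentinel per consumer
    for _ in range(n):
        q.put(None)
for t in threads:
    t.join()
assert sorted(results) == ["flow-task-done", "state-task-done"]
```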
The present invention is an OpenFlow-based event concurrency controller comprising a stream processing module (1), a status processing module (2) and an OpenFlow message distribution control module (3).
In its first aspect, the OpenFlow message distribution control module (3) uses an asynchronous, non-blocking I/O model to receive, from the receive buffer of each link, the OpenFlow messages sent by the OpenFlow switches (4); these OpenFlow messages include Packet-in, Flow-Removed, Port-status and Error messages.
In its second aspect, the OpenFlow message distribution control module (3) sends each flow processing task to the main thread's local task queue Q_z of the stream processing module (1).
The flow processing task is obtained as follows: (A) a Packet-in event is first triggered by a Packet-in message; the flow object FLOW_base_flow = {F_1, F_2, …, F_f} of the Base_flow structure is then generated from the Packet-in event; finally, the flow processing task corresponding to the Packet-in event is generated from the start method of the Base_flow structure. (B) A Flow-Removed event is first triggered by a Flow-Removed message; the flow object FLOW_base_flow = {F_1, F_2, …, F_f} of the Base_flow structure in Table 1 is then generated from the Flow-Removed event; finally, the flow processing task corresponding to the Flow-Removed event is generated from the start method of the Base_flow structure.
In its third aspect, the OpenFlow message distribution control module (3) sends each state processing task to the corresponding access task queue of the status processing module (2).
The state processing task is obtained as follows: (A) a Port-status event is first triggered by a Port-status message; a processing task for the status object STATE_base_state = {S_1, S_2, …, S_s} of the Base_state structure is then generated from the Port-status event, denoted the Port-status state processing task. (B) An Error event is first triggered by an Error message; a processing task for the status object STATE_base_state = {S_1, S_2, …, S_s} of the Base_state structure in Table 2 is then generated from the Error event, denoted the Error state processing task.
In its fourth aspect, the OpenFlow message distribution control module (3) receives the controller-to-switch messages output by the stream processing module (1).
In its fifth aspect, the OpenFlow message distribution control module (3) uses an asynchronous, non-blocking I/O model to output controller-to-switch messages to the OpenFlow switches (4) from the send buffer of the link owned by the corresponding thread among the message-threads TH_3 = {C_1, C_2, …, C_c}.
In its first aspect, the stream processing module (1) receives the flow processing tasks output by the OpenFlow message distribution control module (3).
In its second aspect, the stream processing module (1) saves each flow processing task into the main thread's local task queue Q_z.
In its third aspect, the stream processing module (1) distributes the tasks in Q_z to the local task queues of the computational threads in round-robin fashion.
In its fourth aspect, the stream processing module (1) executes the specific tasks in the local task queues, dynamically generating processing tasks for the flow object FLOW_base_flow = {F_1, F_2, …, F_f}; each such task, denoted a flow-object subtask, is added to the local task queue of the generating thread.
In its fifth aspect, the stream processing module (1) executes the specific tasks in the local task queues, dynamically generating processing tasks for the status object STATE_base_state = {S_1, S_2, …, S_s}; each such task is denoted a status-object subtask. The value of its global attribute is then examined: if global is true, the state is a globally shared state, so the subtask is handed to the status processing module (2) and the generating thread waits for the task-completion message STA_2-1 from the status processing module (2); otherwise the state is a locally shared state, and the subtask is executed directly by the thread of the stream processing module (1) that generated it.
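The fifth-aspect decision above can be sketched as follows. The subtask carries a flag for its global attribute; if true, the subtask is handed to the status processing module's queue, otherwise the generating stream thread runs it in place. The class and attribute names are our assumptions (Table 2 is reproduced in the original only as an image).

```python
import queue

class StateSubtask:
    """Hypothetical status-object subtask; global_ mirrors the global attribute."""
    def __init__(self, global_, run):
        self.global_ = global_   # True -> globally shared state
        self.run = run

state_module_q = queue.Queue()   # stands in for the status processing module (2)

def handle_subtask(task):
    if task.global_:
        state_module_q.put(task)  # defer to the status processing module
        return "deferred"
    return task.run()             # locally shared: execute directly

assert handle_subtask(StateSubtask(False, lambda: "ran-locally")) == "ran-locally"
assert handle_subtask(StateSubtask(True, lambda: "x")) == "deferred"
assert state_module_q.qsize() == 1
```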
In its sixth aspect, the computational threads of the stream processing module (1) balance their task load by task stealing.
In its seventh aspect, the stream processing module (1) outputs controller-to-switch messages to the OpenFlow message distribution control module (3): each computational thread synchronously writes the controller-to-switch messages to be output into the send buffer of the owning link among the message-threads TH_3 = {C_1, C_2, …, C_c}.
In its first aspect, the status processing module (2) receives the state processing tasks sent by the OpenFlow message distribution control module (3) and saves them into the access task queues of the status objects STATE_base_state = {S_1, S_2, …, S_s}.
In its second aspect, the status processing module (2) receives the state processing tasks sent by the stream processing module (1) and likewise saves them into the access task queues of the status objects STATE_base_state = {S_1, S_2, …, S_s}.
In its third aspect, each state-thread in TH_2 = {B_1, B_2, …, B_b} extracts the access task queues belonging to it and executes their tasks by polling, sending a task-completion message to the stream processing module (1) whenever a task finishes: B_1 extracts and polls the access task queues belonging to B_1; B_2 extracts and polls the access task queues belonging to B_2; and so on up to B_b, which extracts and polls the access task queues belonging to B_b.
In its fourth aspect, the set of task-completion messages that the status processing module (2) sends to the stream processing module (1) is given its own designation.
The advantages of the event concurrency processing method of the present invention are:
1. Inside the OpenFlow controller, event processing and message transport are separated from each other; OpenFlow events are processed in parallel by computational threads, and parallel support is added inside flow-event handling, which effectively strengthens the computing power of the OpenFlow controller and improves the scalability of its processing performance.
2. Each shared state is processed by a unique state thread, so shared data can be accessed without mutual exclusion, which simplifies access to shared data during event processing and improves access efficiency to a certain extent.
Accompanying drawing explanation
Fig. 1 is the structured flowchart of the event concurrency controller that the present invention is based on Openflow.
Fig. 2 is the parallel process schematic diagram of Openflow controller inside of the present invention.
Fig. 3 is the speed-up ratio comparison diagram based on switch program.
Fig. 4 is the speed-up ratio comparison diagram based on QPAS algorithm.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the OpenFlow-based event concurrency controller of the present invention comprises a stream processing module 1, a status processing module 2 and an OpenFlow message distribution control module 3; for convenience, the controller of the present invention is referred to below as POCA. POCA is used together with an existing OpenFlow controller and is embedded in the OpenFlow network architecture.
In the present invention, POCA handles the processing associated with the Packet-in (flow-arrival), Flow-Removed (flow-removal), Port-status (port status) and Error (error) messages among the OpenFlow messages.
In the present invention, the event corresponding to a Packet-in message is called a Packet-in event; to a Flow-Removed message, a Flow-Removed event; to a Port-status message, a Port-status event; and to an Error message, an Error event.
In the present invention, in the flow object FLOW_base_flow = {F_1, F_2, …, F_f} of the Base_flow structure, F_1 denotes the flow object of the first type, F_2 the flow object of the second type, and F_f the flow object of the last type; f is the identification number of a flow object, and for convenience of description F_f is hereafter also used for a flow object of any type.
In the present invention, in the status object STATE_base_state = {S_1, S_2, …, S_s} of the Base_state structure, S_1 denotes the status object of the first type, S_2 the status object of the second type, and S_s the status object of the last type; s is the identification number of a status object, and for convenience of description S_s is hereafter also used for a status object of any type. Each status object has a unique access task queue: the access task queue of the first-type status object S_1 is denoted P_1 (the first access task queue P_1), that of the second-type status object S_2 is denoted P_2 (the second access task queue P_2), and that of the last-type status object S_s is denoted P_s (the last access task queue P_s). As a set, the access task queues are written {P_1, P_2, …, P_s}.
In the present invention, the threads of the stream processing module 1 are denoted stream-threads TH_1 = {A_1, A_2, …, A_z, …, A_a}, where A_1 is the first thread of the stream processing module 1, A_2 the second, A_z the z-th and A_a the last; a is the thread identification number in the stream processing module 1, and for convenience of description A_a is hereafter also used for any one thread. When some thread A_z among the stream-threads TH_1 = {A_1, A_2, …, A_z, …, A_a} serves as the main thread, the remaining threads are the computational threads. Each thread has a unique local task queue: the local task queue of the first thread A_1 is denoted Q_1 (the first local task queue Q_1), that of the second thread A_2 is denoted Q_2 (the second local task queue Q_2), that of the z-th thread A_z is denoted Q_z (the z-th local task queue Q_z, also called the main thread's local task queue Q_z), and that of the last thread A_a is denoted Q_a (the last local task queue Q_a). The local task queues corresponding to the computational threads are likewise written as a set.
In the present invention, the threads of the status processing module 2 are denoted state-threads TH_2 = {B_1, B_2, …, B_b}, where B_1 is the first thread of the status processing module 2, B_2 the second and B_b the last; b is the thread identification number in the status processing module 2, and for convenience of description B_b is hereafter also used for any one thread. The status objects STATE_base_state = {S_1, S_2, …, S_s} handled by the status processing module 2 are distributed evenly over the state-threads TH_2 = {B_1, B_2, …, B_b}, so any one state-thread B_b processes several access task queues; the set of access task queues processed by a state-thread B_b is given its own designation, and any access task queue P_s belongs to exactly one thread B_b.
In the present invention, the threads of the OpenFlow message distribution control module 3 are denoted message-threads TH_3 = {C_1, C_2, …, C_c}, where C_1 is the first thread of the module, C_2 the second and C_c the last; c is the thread identification number in the module, and for convenience of description C_c is hereafter also used for any one thread.
The links between the OpenFlow message distribution control module 3 and the OpenFlow switches 4 are written as a set. SV denotes the OpenFlow controller and SW the set of OpenFlow switches, SW = {D_1, D_2, …, D_d}, where D_1 is the first OpenFlow switch, D_2 the second and D_d the last; d is the identification number of an OpenFlow switch, and for convenience of description D_d is hereafter also used for any one OpenFlow switch. Each link, from the first to the last, is given a unique designation. The links between the OpenFlow message distribution control module 3 and the OpenFlow switches 4 are distributed evenly over the message-threads TH_3 = {C_1, C_2, …, C_c}, and any one link belongs to exactly one thread C_c.
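The even link-to-thread assignment above is pure bookkeeping and can be sketched as round-robin ownership; the thread objects themselves are elided, and the function name is our assumption.

```python
def assign_links(num_links, num_threads):
    """Return owner[i] = index of the message-thread C owning link i,
    assigned round-robin so loads differ by at most one link."""
    return [i % num_threads for i in range(num_links)]

owner = assign_links(10, 3)
loads = [owner.count(t) for t in range(3)]
assert loads == [4, 3, 3]        # even distribution
assert len(owner) == 10          # every link owned by exactly one thread
```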
(1) The OpenFlow message distribution control module 3
As shown in Fig. 1, in its first aspect the OpenFlow message distribution control module 3 uses an asynchronous, non-blocking I/O model to receive, from the receive buffer of each link, the OpenFlow messages sent by the OpenFlow switches 4; these OpenFlow messages include Packet-in, Flow-Removed, Port-status and Error messages.
In its second aspect, the OpenFlow message distribution control module 3 sends each flow processing task to the main thread's local task queue Q_z of the stream processing module 1.
In the present invention, the flow processing task is obtained as follows: (A) a Packet-in event is first triggered by a Packet-in message; the flow object FLOW_base_flow = {F_1, F_2, …, F_f} of the Base_flow structure in Table 1 is then generated from the Packet-in event; finally, the flow processing task corresponding to the Packet-in event is generated from the start method of the Base_flow structure. (B) A Flow-Removed event is first triggered by a Flow-Removed message; the flow object FLOW_base_flow = {F_1, F_2, …, F_f} of the Base_flow structure in Table 1 is then generated from the Flow-Removed event; finally, the flow processing task corresponding to the Flow-Removed event is generated from the start method of the Base_flow structure.
In the present invention, in the flow object FLOW_base_flow = {F_1, F_2, …, F_f} of the Base_flow structure, F_1 denotes the flow object of the first type, F_2 the second and F_f the last; f is the identification number of a flow object, and for convenience of description F_f is hereafter also used for a flow object of any type.
Table 1: The Base_flow class
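Table 1 survives in this text only as a caption (its contents were an image), so the following is a guessed outline of the Base_flow shape described in the prose: a flow object whose start method is the entry point of the flow processing task and which may spawn further subtasks while running. All member names are assumptions.

```python
class BaseFlow:
    """Hypothetical Base_flow: a flow object generated from a Packet-in or
    Flow-Removed event; start() is the flow processing task's entry point."""
    def __init__(self, match_fields):
        self.match_fields = match_fields
        self.subtasks = []            # dynamically generated flow-object subtasks

    def spawn(self, fn):
        self.subtasks.append(fn)

    def start(self):
        """Run the flow processing task, spawning subtasks along the way."""
        self.spawn(lambda: "install-flow-entry")
        self.spawn(lambda: "send-packet-out")
        return [task() for task in self.subtasks]

flow = BaseFlow({"eth_dst": "aa:bb"})
assert flow.start() == ["install-flow-entry", "send-packet-out"]
```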
In its third aspect, the OpenFlow message distribution control module 3 sends each state processing task to the corresponding access task queue of the status processing module 2.
In the present invention, the state processing task is obtained as follows: (A) a Port-status event is first triggered by a Port-status message; a processing task for the status object STATE_base_state = {S_1, S_2, …, S_s} of the Base_state structure in Table 2 is then generated from the Port-status event, denoted the Port-status state processing task. (B) An Error event is first triggered by an Error message; a processing task for the status object STATE_base_state = {S_1, S_2, …, S_s} of the Base_state structure in Table 2 is then generated from the Error event, denoted the Error state processing task.
Table 2: The Base_state class
In its fourth aspect, the OpenFlow message distribution control module 3 receives the controller-to-switch messages output by the stream processing module 1.
In its fifth aspect, the OpenFlow message distribution control module 3 uses an asynchronous, non-blocking I/O model to output controller-to-switch messages to the OpenFlow switches 4 from the send buffer of the link owned by the corresponding thread among the message-threads TH_3 = {C_1, C_2, …, C_c}.
(2) The stream processing module 1
As shown in Fig. 1, in its first aspect the stream processing module 1 receives the flow processing tasks output by the OpenFlow message distribution control module 3; in its second aspect it saves them into the main thread's local task queue Q_z; in its third aspect it distributes the tasks in Q_z to the local task queues of the computational threads of the stream processing module 1 in round-robin fashion.
In its fourth aspect, the stream processing module 1 executes the specific tasks in the local task queues, dynamically generating processing tasks for the flow object FLOW_base_flow = {F_1, F_2, …, F_f}; each such task, denoted a flow-object subtask, is added to the local task queue of the generating thread.
In its fifth aspect, the stream processing module 1 executes the specific tasks in the local task queues, dynamically generating processing tasks for the status object STATE_base_state = {S_1, S_2, …, S_s}; each such task is denoted a status-object subtask. The value of its global attribute (shown in the fourth column of Table 2) is then examined: if global is true, the state is a globally shared state, so the subtask is handed to the status processing module 2 and the generating thread waits for the task-completion message STA_2-1 from the status processing module 2; otherwise the state is a locally shared state, and the subtask is executed directly by the thread of the stream processing module 1 that generated it.
In its sixth aspect, the computational threads of the stream processing module 1 balance their load by task stealing.
Public reference for the task-stealing approach:
Article: Scheduling multithreaded computations by work stealing
Authors: Robert D. Blumofe (Univ. of Texas at Austin, Austin); Charles E. Leiserson (MIT Lab for Computer Science, Cambridge, MA)
Published in: Journal of the ACM (JACM), Volume 46, Issue 5, Sept. 1999, pages 720-748, ACM, New York, NY, USA.
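The cited work-stealing discipline can be illustrated in miniature: each worker pushes and pops tasks at the bottom of its own deque, and an idle worker steals from the top of a victim's deque. This single-threaded sketch elides the synchronization that real implementations need at the top end.

```python
import collections

class Worker:
    """One worker with its own double-ended task queue."""
    def __init__(self):
        self.deque = collections.deque()

    def pop(self):
        """Owner works from the bottom (LIFO)."""
        return self.deque.pop() if self.deque else None

    def steal_from(self, victim):
        """An idle thief takes from the top of a victim's deque (FIFO)."""
        return victim.deque.popleft() if victim.deque else None

a, b = Worker(), Worker()
a.deque.extend(["t1", "t2", "t3"])
assert a.pop() == "t3"            # owner gets the newest task
assert b.steal_from(a) == "t1"    # thief steals the oldest
assert a.pop() == "t2"
assert b.steal_from(a) is None    # nothing left to steal
```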
In the present invention, when the stream processing module 1 receives a task-completion message output by the status processing module 2, it first checks whether any computational thread is waiting for that message; if so, the waiting computational thread resumes execution; otherwise the message is ignored.
In its seventh aspect, the stream processing module 1 outputs controller-to-switch messages to the OpenFlow message distribution control module 3: in the present invention, each computational thread synchronously writes the controller-to-switch messages to be output into the send buffer of the owning link among the message-threads TH_3 = {C_1, C_2, …, C_c}.
(3) The status processing module 2
As shown in Fig. 1, in its first aspect the status processing module 2 receives the state processing tasks sent by the OpenFlow message distribution control module 3 and saves them into the access task queues of the status objects STATE_base_state = {S_1, S_2, …, S_s}; in its second aspect it receives the state processing tasks sent by the stream processing module 1 and likewise saves them into the access task queues of the status objects STATE_base_state = {S_1, S_2, …, S_s}.
In its third aspect, each state-thread in TH_2 = {B_1, B_2, …, B_b} extracts the access task queues belonging to it and executes their tasks by polling, sending a task-completion message to the stream processing module 1 whenever a task finishes: B_1 extracts and polls the access task queues belonging to B_1; B_2 extracts and polls the access task queues belonging to B_2; and so on up to B_b, which extracts and polls the access task queues belonging to B_b.
In its fourth aspect, the set of task-completion messages that the status processing module 2 sends to the stream processing module 1 is given its own designation.
As shown in Fig. 2, the OpenFlow-based event concurrency controller designed by the present invention performs parallel processing of OpenFlow events through the following parallel processing steps:
Step 1: parallel sending and receiving of OpenFlow messages, triggering the corresponding OpenFlow events
In the OpenFlow-based event concurrency controller of the present invention, each switch has a unique link.
During link establishment, the first thread C_1 of the message-threads TH_3 = {C_1, C_2, …, C_c} listens for link requests from the OpenFlow switches SW = {D_1, D_2, …, D_d}. After a link request is received, the link is established and the links are distributed evenly over the message-threads TH_3; any one link is handled by a unique thread C_c.
During OpenFlow message reception, the message-threads TH_3 = {C_1, C_2, …, C_c} use an asynchronous, non-blocking I/O model to receive, from the receive buffers of the links, the OpenFlow messages sent by the OpenFlow switches SW = {D_1, D_2, …, D_d}. A Packet-in event is triggered by a Packet-in message; a Flow-Removed event by a Flow-Removed message; a Port-status event by a Port-status message; and an Error event by an Error message.
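The receive path above can be sketched with the standard `selectors` module: one selector multiplexes many switch links on a message-thread, reading without blocking and mapping each message to its event. Framing is simplified to one text line per message; real OpenFlow messages have binary headers, so everything here is an illustrative assumption.

```python
import selectors
import socket

# Hypothetical mapping from a (simplified) message payload to its event.
EVENT_OF = {b"packet-in": "Packet-in event", b"flow-removed": "Flow-Removed event",
            b"port-status": "Port-status event", b"error": "Error event"}

def serve_one(listener, sel):
    """Accept one link, read one message non-blockingly, trigger its event."""
    conn, _ = listener.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ)
    for key, _ in sel.select(timeout=1):
        data = key.fileobj.recv(64).strip()
        sel.unregister(key.fileobj)
        key.fileobj.close()
        return EVENT_OF.get(data, "unknown")

sel = selectors.DefaultSelector()
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()

switch = socket.socket()                   # stands in for an OpenFlow switch
switch.connect(listener.getsockname())
switch.sendall(b"packet-in\n")
assert serve_one(listener, sel) == "Packet-in event"
switch.close(); listener.close(); sel.close()
```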
During OpenFlow message transmission, the message-threads TH_3 = {C_1, C_2, …, C_c} use an asynchronous, non-blocking I/O model to output controller-to-switch messages to the OpenFlow switches SW = {D_1, D_2, …, D_d} from the send buffers of the links they own. Operations on a link's send buffer must be synchronized.
In the present invention, the parallel sending and receiving of OpenFlow messages uses multiple message-threads in parallel; each OpenFlow switch link is handled by a unique message-thread, so there is no mutual exclusion between message-threads and message throughput is maximized. In addition, the asynchronous, non-blocking I/O model reduces interference between message transport and message processing, further improving throughput.
Step 2: parallel processing of OpenFlow events
For a Packet-in event or a Flow-Removed event, the flow object FLOW_base_flow = {F_1, F_2, …, F_f} of the Base_flow structure in Table 1 is generated first; the flow processing task is then generated from the start method of the Base_flow structure; finally, the task is sent to the main thread's local task queue Q_z in the stream processing module 1.
For a flow processing task, the main thread A_z of the stream processing module 1 distributes it to the local task queues of the computational threads in round-robin fashion. The computational threads TH_1 = {A_1, A_2, …, A_a} execute the specific tasks in their queues, dynamically generating flow-object subtasks, which they add to their own local task queues.
The computational threads TH_1 = {A_1, A_2, …, A_a} also execute the specific tasks that dynamically generate status-object subtasks. The value of a subtask's global attribute (shown in the fourth column of Table 2) is then examined: if global is true, the state is a globally shared state, so the subtask is handed to the status processing module 2 and the generating thread waits for the task-completion message STA_2-1 from the status processing module 2; otherwise the state is a locally shared state, and the subtask is executed directly by the thread of the stream processing module 1 that generated it. The computational threads of the stream processing module 1 balance their load by task stealing.
For a Port-status event or an Error event, a processing task for the status object STATE_base_state = {S_1, S_2, …, S_s} of the Base_state structure in Table 2 is generated first, and the task is then sent to the corresponding access task queue of the status processing module 2.
For these tasks, in the status processing module 2 each state-thread of TH_2 = {B_1, B_2, …, B_b} extracts the access task queues belonging to it and executes their tasks by polling, sending a task-completion message to the stream processing module 1 whenever a task finishes: B_1 extracts and polls the access task queues belonging to B_1; B_2 extracts and polls those belonging to B_2; and so on up to B_b, which extracts and polls those belonging to B_b.
In the present invention, after an OpenFlow message is received, the corresponding OpenFlow event is triggered, and processing tasks for flow objects and status objects are produced according to the event type and handed to different threads for parallel processing. During flow-event processing, subtasks can be produced dynamically and processed in parallel by multiple computational threads via task stealing, improving flow-event throughput. Each shared state in the controller is processed by a unique state thread, which not only simplifies access to shared state but also, to a certain extent, improves the efficiency with which the computational threads process flow events. The event concurrency processing method of the present invention thus gives the OpenFlow controller better performance scalability.
Verification embodiment
In the event concurrency processing system POCA based on Openflow messages, when running a switch program with a small computational load, POCA achieves a higher speed-up ratio than other Openflow controllers as the number of threads grows beyond 8. The reason is that each IO thread processes only the switch links assigned to it, so the threads do not interfere with one another, which improves processing performance. A speed-up ratio comparison for the switch program is shown in Figure 3.
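The per-link IO design credited here — each link owned by exactly one IO thread and serviced with asynchronous, non-blocking IO — might be sketched as below, using Python's `selectors` module as a stand-in for the controller's IO model. The round-robin link assignment and the `IoThread` class are illustrative assumptions.

```python
import selectors
import socket
import threading
import time

class IoThread:
    """Services only the links assigned to it; links never migrate,
    so no two IO threads ever touch the same connection."""

    def __init__(self):
        self.sel = selectors.DefaultSelector()
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.received = []

    def add_link(self, conn):
        conn.setblocking(False)  # asynchronous, non-blocking IO
        self.sel.register(conn, selectors.EVENT_READ)

    def _loop(self):
        while True:
            # Only sockets with data ready are visited; the thread never
            # blocks on one link. (A real loop would handle disconnects.)
            for key, _ in self.sel.select(timeout=0.1):
                data = key.fileobj.recv(4096)
                if data:
                    self.received.append(data)

# Distribute links evenly: link i goes to IO thread i % n.
threads = [IoThread() for _ in range(2)]
links = [socket.socketpair() for _ in range(4)]
for i, (server_side, _) in enumerate(links):
    threads[i % len(threads)].add_link(server_side)
for t in threads:
    t.thread.start()

links[0][1].sendall(b"packet-in")      # arrives at IO thread 0
links[1][1].sendall(b"flow-removed")   # arrives at IO thread 1
time.sleep(0.3)
print(threads[0].received)  # only messages from its own links (0 and 2)
print(threads[1].received)  # only messages from its own links (1 and 3)
```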
A speed-up ratio comparison for the QPAS algorithm is shown in Figure 4. When running the QPAS algorithm, whose computational load is larger, POCA achieves a higher speed-up ratio than NOX as the number of threads increases. The reason is that POCA parallelizes the interior of event processing with its computational threads, improving the processing efficiency of each event and hence the overall processing efficiency.
The invention discloses an event concurrency controller based on Openflow and its event concurrency processing method. The method separates the transmission and reception of Openflow messages from the processing of Openflow events, and uses additional computational threads to accelerate Openflow event processing. After an application starts, the controller establishes links with the switches and distributes the links evenly among multiple IO threads; the messages of each link are transmitted and received by a unique IO thread. After receiving an Openflow message, the application triggers the corresponding Openflow event and, according to the event type, generates processing tasks for flow objects and status objects, which are handed to different threads for processing. During stream event processing, subtasks can be generated dynamically and executed in parallel by multiple threads. Shared state is processed by a unique state thread. Compared with existing parallel processing methods for Openflow events, the method of the invention has better performance scalability and a simpler data access pattern.
Claims (4)
1. An event concurrency controller based on Openflow, characterized in that the controller comprises a stream processing module (1), a status processing module (2) and an Openflow message distribution control module (3);
In a first aspect, the Openflow message distribution control module (3) uses an asynchronous, non-blocking IO model to receive, from the receive buffers of the links, the Openflow messages sent by the Openflow switch (4); the Openflow messages include Packet-in messages, Flow-Removed messages, Port-status messages and Error messages;
A Packet-in message is a flow-arrival message;
A Flow-Removed message is a flow-removal message;
A Port-status message is a port-status message;
An Error message is an error message;
In a second aspect, the Openflow message distribution control module (3) sends the stream processing tasks to the local task queue Q_z of the main thread of the stream processing module (1); one stream processing task corresponds to a Packet-in event and one to a Flow-Removed event;
The stream processing tasks are obtained as follows: (A) a Packet-in event is first triggered by a Packet-in message; a flow object FLOW_base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure is then generated from the Packet-in event; finally, the stream processing task corresponding to the Packet-in event is generated from the start method of the Base_flow structure; (B) a Flow-Removed event is first triggered by a Flow-Removed message; a flow object FLOW_base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure in Table 1 is then generated from the Flow-Removed event; finally, the stream processing task corresponding to the Flow-Removed event is generated from the start method of the Base_flow structure;
In the flow object FLOW_base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure, F_1 denotes the flow object of the first type, F_2 the flow object of the second type and F_f the flow object of the last type; f is the identification number of a flow object;
Table 1: the Base_flow class
In a third aspect, the Openflow message distribution control module (3) sends the state processing tasks to the access task queues of the status processing module (2); one state processing task corresponds to a Port-status event and one to an Error event;
The state processing tasks are obtained as follows: (A) a Port-status event is first triggered by a Port-status message; a processing task for the status object STATE_base_state = {S_1, S_2, ..., S_s} of the Base_state structure is then generated from the Port-status event and recorded as the Port-status state processing task; (B) an Error event is first triggered by an Error message; a processing task for the status object STATE_base_state = {S_1, S_2, ..., S_s} of the Base_state structure in Table 2 is then generated from the Error event and recorded as the Error state processing task;
In the status object STATE_base_state = {S_1, S_2, ..., S_s} of the Base_state structure, S_1 denotes the status object of the first type, S_2 the status object of the second type and S_s the status object of the last type; s is the identification number of a status object;
Table 2: the Base_state class
In a fourth aspect, the Openflow message distribution control module (3) receives the controller-to-switch messages output by the stream processing module (1);
In a fifth aspect, the Openflow message distribution control module (3) uses an asynchronous, non-blocking IO model to output controller-to-switch messages to the Openflow switch (4) from the transmit buffers of the links belonging to the message threads TH_3 = {C_1, C_2, ..., C_c};
Among the message threads TH_3 = {C_1, C_2, ..., C_c}, C_1 denotes the first thread of the Openflow message distribution control module (3), C_2 the second thread and C_c the last thread; c is the thread identification number in the Openflow message distribution control module (3);
In a first aspect, the stream processing module (1) receives the stream processing tasks output by the Openflow message distribution control module (3); in a second aspect, it saves them in the local task queue Q_z of the main thread; in a third aspect, it sends them by polling to the local task queues of the computational threads;
The computational threads TH_1 = {A_1, A_2, ..., A_a} correspond to the set of local task queues {Q_1, Q_2, ..., Q_a}; among the stream threads TH_1 = {A_1, A_2, ..., A_z, ..., A_a}, A_1 denotes the first thread of the stream processing module (1), A_2 the second thread, A_z the z-th thread and A_a the last thread; a is the thread identification number in the stream processing module (1); the local task queue of the first thread A_1 is denoted Q_1, that of the second thread A_2 is Q_2, that of the z-th thread A_z is Q_z and that of the last thread A_a is Q_a;
In a fourth aspect, the stream processing module (1) executes the specific tasks in the queues and dynamically generates processing tasks for the flow object FLOW_base_flow = {F_1, F_2, ..., F_f}, recorded as flow-object subtasks, which are added to the local task queues;
In a fifth aspect, the stream processing module (1) executes the specific tasks in the queues and dynamically generates processing tasks for the status object STATE_base_state = {S_1, S_2, ..., S_s}, recorded as status-object subtasks; the value of the global attribute of the subtask is examined: if global is true, the state is a globally shared state and the subtask is handed to the status processing module (2), with the thread waiting for the task-completion message STA_2-1 from the status processing module (2); otherwise, if global is not true, the state is a locally shared state and the subtask is executed directly by the thread in the stream processing module (1) that generated it;
In the status object STATE_base_state = {S_1, S_2, ..., S_s} of the Base_state structure, S_1 denotes the status object of the first type, S_2 the status object of the second type and S_s the status object of the last type; s is the identification number of a status object;
In a sixth aspect, the computational threads of the stream processing module (1) balance their task load by task stealing;
In a seventh aspect, the stream processing module (1) outputs controller-to-switch messages to the Openflow message distribution module (3); the computational threads synchronously write the controller-to-switch messages to be output into the transmit buffers of the links belonging to the message threads TH_3 = {C_1, C_2, ..., C_c}; among TH_3 = {C_1, C_2, ..., C_c}, C_1 denotes the first thread of the Openflow message distribution control module (3), C_2 the second thread and C_c the last thread; c is the thread identification number in the Openflow message distribution control module (3);
In a first aspect, the status processing module (2) receives the state processing tasks sent by the Openflow message distribution control module (3) and saves them in the access task queues of the status objects STATE_base_state = {S_1, S_2, ..., S_s}; among the access task queues, the queue of the first-type status object S_1 is denoted P_1, that of the second-type status object S_2 is denoted P_2 and that of the last-type status object S_s is denoted P_s;
In a second aspect, the status processing module (2) receives the state processing tasks sent by the stream processing module (1) and saves them in the access task queues of the status objects STATE_base_state = {S_1, S_2, ..., S_s};
In a third aspect, each state thread B_i in TH_2 = {B_1, B_2, ..., B_b} extracts from the set of access task queues the queues belonging to B_i and executes their tasks by polling; when a task completes, a task-completion message is sent to the stream processing module (1); among the state threads TH_2 = {B_1, B_2, ..., B_b}, B_1 denotes the first thread of the status processing module (2), B_2 the second thread and B_b the last thread; b is the thread identification number in the status processing module (2);
In a fourth aspect, the set of task-completion messages that the status processing module (2) sends for the tasks received from the stream processing module (1) is designated STA_2-1;
2. The event concurrency controller based on Openflow according to claim 1, characterized in that the controller is used together with an existing Openflow controller and is embedded in the Openflow network architecture.
3. An event concurrency processing method carried out by the event concurrency controller based on Openflow according to claim 1, characterized by the following steps:
Step 1: receive and transmit Openflow messages in parallel, triggering the corresponding Openflow events.
In the event concurrency controller based on Openflow, each switch has a unique link;
During link establishment, the first thread C_1 of the message threads TH_3 = {C_1, C_2, ..., C_c} listens for link requests from the Openflow switches SW = {D_1, D_2, ..., D_d}; after receiving a link request, it establishes the link and distributes the links evenly among the message threads TH_3; any one link is processed by a unique thread C_c;
During Openflow message reception, the message threads TH_3 = {C_1, C_2, ..., C_c} use an asynchronous, non-blocking IO model to receive, from the receive buffers of the links, the Openflow messages sent by the Openflow switches SW = {D_1, D_2, ..., D_d}; a Packet-in event is triggered by a Packet-in message, a Flow-Removed event by a Flow-Removed message, a Port-status event by a Port-status message and an Error event by an Error message;
During Openflow message transmission, the message threads TH_3 = {C_1, C_2, ..., C_c} use an asynchronous, non-blocking IO model to output controller-to-switch messages to the Openflow switches SW = {D_1, D_2, ..., D_d} from the transmit buffers of the links belonging to the message threads; operations on the transmit buffer of a link must be synchronized;
Step 2: process the Openflow events in parallel.
For Packet-in events and Flow-Removed events, first generate the flow object FLOW_base_flow = {F_1, F_2, ..., F_f} of the Base_flow structure in Table 1, then generate the stream processing task from the start method of the Base_flow structure, and finally send this task to the local task queue Q_z of the main thread in the stream processing module (1);
For a stream processing task, the main thread A_z of the stream processing module (1) sends it by polling to the local task queues of the computational threads; the computational threads TH_1 = {A_1, A_2, ..., A_a} execute the specific tasks in the queues and dynamically generate flow-object subtasks, which are added to the local task queues;
The computational threads TH_1 = {A_1, A_2, ..., A_a} execute the specific tasks in the queues and dynamically generate status-object subtasks; the value of the global attribute of the subtask is examined: if global is true, the state is a globally shared state and the subtask is handed to the status processing module (2), with the thread waiting for the task-completion message STA_2-1 from the status processing module (2); otherwise, if global is not true, the state is a locally shared state and the subtask is executed directly by the thread in the stream processing module (1) that generated it; the computational threads in the stream processing module (1) balance their load by task stealing;
For Port-status events and Error events, first generate a processing task for the status object STATE_base_state = {S_1, S_2, ..., S_s} of the Base_state structure in Table 2, then send this task to the access task queue of the status processing module (2);
For these tasks, in the status processing module (2), each state thread B_i in TH_2 = {B_1, B_2, ..., B_b} extracts from the set of access task queues the queues belonging to B_i and executes their tasks by polling; when a task completes, a task-completion message is sent to the stream processing module (1).
4. The event concurrency processing method carried out by the event concurrency controller based on Openflow according to claim 1, characterized in that the speed-up ratio increases as the number of threads increases.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310647876.0A CN103677760B (en) | 2013-12-04 | 2013-12-04 | A kind of event concurrency controller based on Openflow and event concurrency disposal route thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103677760A CN103677760A (en) | 2014-03-26 |
CN103677760B true CN103677760B (en) | 2015-12-02 |
Family
ID=50315439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310647876.0A Expired - Fee Related CN103677760B (en) | 2013-12-04 | 2013-12-04 | A kind of event concurrency controller based on Openflow and event concurrency disposal route thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103677760B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104156260B (en) * | 2014-08-07 | 2017-03-15 | 北京航空航天大学 | The concurrent queue accesses control system that a kind of task based access control is stolen |
CN104660696B (en) * | 2015-02-10 | 2018-04-27 | 上海创景信息科技有限公司 | Parallel transmitting-receiving structure system and its construction method |
CN105991588B (en) * | 2015-02-13 | 2019-05-28 | 华为技术有限公司 | A kind of method and device for defending message attack |
CN109669724B (en) * | 2018-11-26 | 2021-04-06 | 许昌许继软件技术有限公司 | Multi-command concurrent proxy service method and system based on Linux system |
CN110177146A (en) * | 2019-05-28 | 2019-08-27 | 东信和平科技股份有限公司 | A kind of non-obstruction Restful communication means, device and equipment based on asynchronous event driven |
CN112380028A (en) * | 2020-10-26 | 2021-02-19 | 上汽通用五菱汽车股份有限公司 | Asynchronous non-blocking response type message processing method |
CN116185662B (en) * | 2023-02-14 | 2023-11-17 | 国家海洋环境预报中心 | Asynchronous parallel I/O method based on NetCDF and non-blocking communication |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5968160A (en) * | 1990-09-07 | 1999-10-19 | Hitachi, Ltd. | Method and apparatus for processing data in multiple modes in accordance with parallelism of program by using cache memory |
CN103401777A (en) * | 2013-08-21 | 2013-11-20 | 中国人民解放军国防科学技术大学 | Parallel search method and system of Openflow |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103139265B (en) * | 2011-12-01 | 2016-06-08 | 国际商业机器公司 | Network adaptation transmitter optimization method in massive parallel processing and system |
2013-12-04: CN application CN201310647876.0A, granted as patent CN103677760B (en), not active: Expired - Fee Related
Non-Patent Citations (2)
Title |
---|
Hadoop Acceleration in an OpenFlow-based cluster;Sandhya Narayan et al.;《High Performance Computing, Networking Storage and Analysis (SCC), 2012 SC Companion》;20121116;pp. 535-538 *
A group-based adaptive task scheduling algorithm in multi-core environments;Li Bo et al.;《Proceedings of the 2012 National Annual Conference on High Performance Computing》;20131112;pp. 1-4 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20210423 Address after: 100160, No. 4, building 12, No. 128, South Fourth Ring Road, Fengtai District, Beijing, China (1515-1516) Patentee after: Kaixi (Beijing) Information Technology Co.,Ltd. Address before: 100191 Haidian District, Xueyuan Road, No. 37, Patentee before: BEIHANG University |
|
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20151202 Termination date: 20211204 |