CN106453084B - Flow scheduling method for a hybrid data center network based on congestion coefficients - Google Patents

Flow scheduling method for a hybrid data center network based on congestion coefficients

Info

Publication number
CN106453084B
CN106453084B, CN201611047098.1A, CN201611047098A
Authority
CN
China
Prior art keywords
flow
path
rack
congestion
pod
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611047098.1A
Other languages
Chinese (zh)
Other versions
CN106453084A (en)
Inventor
郭得科
罗来龙
任棒棒
苑博
刘云飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Publication of CN106453084A
Application granted
Publication of CN106453084B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/14: Routing performance; Theoretical aspects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/70: Routing based on monitoring results
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/621: Individual queue per connection or flow, e.g. per VC

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a flow scheduling method for a hybrid data center network based on congestion coefficients, comprising: determining the links between a plurality of racks in the wired Fat-Tree structure and in the wireless 2D-Torus structure of a hybrid data center network system, as well as the route between any pair of racks; obtaining a plurality of to-be-injected data flows that arrive as a batch, and assigning to each batched flow a transmission path with the minimum sum of congestion coefficients; and continuously obtaining to-be-injected data flows that arrive sequentially, and assigning to each sequential flow a transmission path with the minimum sum of congestion-coefficient increments. By splitting the arriving data flows into batched flows and sequential flows and scheduling the batched flows before the sequential flows, the invention further improves the network performance of the hybrid structure.

Description

Flow scheduling method for a hybrid data center network based on congestion coefficients
Technical field
The present invention relates to the field of hybrid communications, and in particular to a flow scheduling method for a hybrid data center network based on congestion coefficients.
Background art
A data center is the infrastructure underlying online applications and fundamental services. Thousands of servers and switches are interconnected through a data center network (DCN). Current data center networks fall into two major categories: wired data centers and wireless data centers. Inside a wired data center, servers and switches are networked over wired links such as twisted pairs and optical fibers; Fat-Tree and VL2 belong to this category. Inside a wireless data center, networking is mainly realized over wireless communication links, either by interconnecting the racks wirelessly, or by connecting all servers and switches into a fully wireless network structure.
Wired data center networks have inherent drawbacks. First, a wired data center is either provisioned at a substantial cost in order to maintain good network performance, or overly simplified to reduce cost at the price of network performance that cannot be guaranteed. Second, expanding an existing data center is extremely difficult and complex. Third, a wired data center requires considerable cabling and maintenance cost. Finally, large-scale wired data centers generally adopt a multi-layer structure; as a result, two servers belonging to different racks must communicate over upper-layer links even when they are physically very close to each other.
For the problems of high expansion cost and poor flexibility of wired data centers in the prior art, no effective solution has yet been proposed.
Summary of the invention
In view of this, an object of the present invention is to propose a flow scheduling method for a hybrid data center network based on congestion coefficients, which can establish cross-rack wireless connections that require no control, without changing the existing devices and layout of a wired data center, thereby significantly expanding the wired data center at a small cost, improving network flexibility, and at the same time improving network performance.
Based on the above object, the technical solution provided by the present invention is as follows:
According to an aspect of the present invention, a flow scheduling method for a hybrid data center network based on congestion coefficients is provided, comprising:
determining the links of the wired Fat-Tree structure and of the wireless 2D-Torus structure between the plurality of racks in the hybrid data center network system, and the route between any pair of racks;
obtaining a plurality of to-be-injected data flows that arrive as a batch, and assigning to each batched flow a transmission path with the minimum sum of congestion coefficients;
continuously obtaining to-be-injected data flows that arrive sequentially, and assigning to each sequential flow a transmission path with the minimum sum of congestion-coefficient increments.
Wherein, assigning to each batched flow a transmission path with the minimum sum of congestion coefficients comprises:
taking each batched flow in turn, and searching all of its candidate paths formed by the links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
calculating the congestion coefficient of each link, and calculating the congestion coefficient of each candidate path from the congestion coefficients of its links;
selecting the candidate path with the smallest congestion coefficient as the transmission path of the given batched flow.
Moreover, each batched flow is transmitted on only one candidate path, and the maximum congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, wherein the congestion coefficient of a link represents the maximum number of flows that may use this link, and the congestion coefficient of a candidate path represents the maximum number of flows that may pass through this path.
Moreover, the set of candidate paths obtained by searching all paths formed by the links of the wired Fat-Tree structure and the wireless 2D-Torus structure comprises k²/4 wired paths, one wireless path and one hybrid path, where k is the number of pods in the wireless 2D-Torus structure.
Meanwhile, assigning to each sequential flow a transmission path with the minimum sum of congestion-coefficient increments comprises:
obtaining newly arrived flows and old flows that need to be retransmitted, the newly arrived flows and the old flows to be retransmitted together constituting the sequential flows to be scheduled;
updating the congestion coefficient of every link in the wired Fat-Tree structure and the wireless 2D-Torus structure;
taking each sequential flow in turn, and searching all of its candidate paths formed by the links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
calculating the congestion ratio of each link, and calculating the congestion ratio of each candidate path from the congestion ratios of its links, wherein the congestion ratio is the increment of the congestion coefficient;
selecting the candidate path with the smallest congestion ratio as the transmission path of the given sequential flow.
Moreover, each sequential flow is transmitted on only one candidate path, and the maximum congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, wherein the congestion coefficient of a link represents the maximum number of flows that may use this link, and the congestion coefficient of a candidate path represents the maximum number of flows that may pass through this path.
In addition, determining the route between any pair of racks in the hybrid data center network system comprises:
setting k pods over the plurality of racks, and assigning each rack to a pod, the k pods forming the pod layer;
assigning a label to each rack according to its pod assignment, and building the pod-layer logic graph;
calculating, on the pod-layer logic graph, the path Path_hp from the pod where the source rack is located to the pod where the destination rack is located;
selecting each wireless connection in Path_hp such that the wireless connection path of Path_hp is the shortest, and obtaining the rack-level path Path_ht;
adding the required aggregation-layer switches and wired connections into Path_ht, thereby obtaining the route Path_h.
Moreover, the label comprises a label prefix and a label suffix, and assigning a label to each rack according to its pod assignment and building the pod-layer logic graph comprises:
for each prefix value x (x = 0, 1, ..., k-1), randomly selecting k/2 racks that have no label prefix yet, setting the label prefix of the selected racks to x, and ensuring that the label prefixes of any two adjacent racks are different;
setting a label suffix for each rack, the label suffix taking a value from 0 to k/2-1, wherein any two racks with the same label prefix have different label suffixes;
calculating the connectivity of the pod-layer logic graph under the current labeling;
repeating the above steps multiple times, and choosing the label allocation scheme with the maximum connectivity as the result of generating the pod-layer logic graph.
It can be seen from the above that the technical solution provided by the present invention builds a wireless 2D-Torus network that is coupled with the existing wired Fat-Tree rack structure so that the hybrid structure works as a whole: cross-rack wireless connections requiring no control are established without changing the existing devices and layout of the wired data center, so that the wired data center is significantly expanded at a small cost and network flexibility is improved. Meanwhile, by splitting the arriving data flows into batched flows and sequential flows and scheduling the batched flows before the sequential flows, the network performance of the hybrid structure is further improved.
Brief description of the drawings
In order to describe the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flow chart of a flow scheduling method for a hybrid data center network based on congestion coefficients according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the connections between racks and the wireless signal transmission in a prior-art wireless data center network system;
Fig. 3 is an illuminance distribution diagram of a signal transceiver facing the light source in the method according to an embodiment of the present invention;
Fig. 4 is an illuminance distribution diagram of a signal transceiver placed sideways to the light source in the method according to an embodiment of the present invention;
Fig. 5 is an illuminance distribution diagram of a signal transceiver facing away from the light source in the method according to an embodiment of the present invention;
Fig. 6 is an illuminance distribution diagram of another signal transceiver placed sideways to the light source in the method according to an embodiment of the present invention;
Fig. 7 is a top-of-rack view of the hybrid data center network system in the method according to an embodiment of the present invention;
Fig. 8 is the rack-layer logic graph of VLCcube in the method according to an embodiment of the present invention;
Fig. 9 is the pod-layer logic graph of VLCcube in the method according to an embodiment of the present invention;
Fig. 10 is a bar chart comparing the average path length of VLCcube and Fat-Tree versus the value of k in the method according to an embodiment of the present invention;
Fig. 11 is a bar chart comparing the total network bandwidth of VLCcube and Fat-Tree versus the value of k in the method according to an embodiment of the present invention;
Fig. 12 is a bar chart comparing the pod-layer connectivity measurement of VLCcube and Fat-Tree versus the value of k in the method according to an embodiment of the present invention;
Fig. 13 is a bar chart comparing the routing algorithm complexity measurement of VLCcube and Fat-Tree versus the value of k in the method according to an embodiment of the present invention;
Fig. 14 is a bar chart comparing the throughput of VLCcube and Fat-Tree versus the value of k under Trace traffic in the method according to an embodiment of the present invention;
Fig. 15 is a bar chart comparing the packet loss of VLCcube and Fat-Tree versus the value of k under Trace traffic in the method according to an embodiment of the present invention;
Fig. 16 is a bar chart comparing the throughput of VLCcube and Fat-Tree versus the value of k under Stride-2k traffic in the method according to an embodiment of the present invention;
Fig. 17 is a bar chart comparing the packet loss of VLCcube and Fat-Tree versus the value of k under Stride-2k traffic in the method according to an embodiment of the present invention;
Fig. 18 is a bar chart comparing the throughput of VLCcube and Fat-Tree versus the flow size under Stride-2k traffic in the method according to an embodiment of the present invention;
Fig. 19 is a bar chart comparing the packet loss of VLCcube and Fat-Tree versus the flow size under Stride-2k traffic in the method according to an embodiment of the present invention;
Fig. 20 is a bar chart comparing the throughput of VLCcube and Fat-Tree versus the value of k under random traffic in the method according to an embodiment of the present invention;
Fig. 21 is a bar chart comparing the packet loss of VLCcube and Fat-Tree versus the value of k under random traffic in the method according to an embodiment of the present invention;
Fig. 22 is a bar chart comparing the throughput of VLCcube and Fat-Tree versus the flow size under random traffic in the method according to an embodiment of the present invention;
Fig. 23 is a bar chart comparing the packet loss of VLCcube and Fat-Tree versus the flow size under random traffic in the method according to an embodiment of the present invention;
Fig. 24 is a bar chart comparing the throughput of ECMP and SBF versus the value of k under batched traffic in the method according to an embodiment of the present invention;
Fig. 25 is a bar chart comparing the packet loss of ECMP and SBF versus the value of k under batched traffic in the method according to an embodiment of the present invention;
Fig. 26 is a bar chart comparing the throughput of ECMP-2, ECMP-4, SOF-2 and SOF-4 versus the value of k under sequential traffic in the method according to an embodiment of the present invention;
Fig. 27 is a bar chart comparing the packet loss of ECMP-2, ECMP-4, SOF-2 and SOF-4 versus the value of k under sequential traffic in the method according to an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly, completely and in detail with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention fall within the protection scope of the present invention.
To reduce the cost of expanding a wired data center and to increase network flexibility, several rack-level wireless data center network structures have been proposed. As shown in Fig. 2, the racks in the network are connected together by wireless links, and signals between different racks are reflected onto the target rack by a mirror surface through precise adjustment of the emission angle. In particular, 60 GHz radio-frequency technology and free-space optical (FSO) technology have been used to establish wireless links between racks. Such schemes can improve the bandwidth of a wired data center and reduce packet delay. In addition, the wireless links can be dynamically reconfigured to meet the demand of the current communication pattern.
Wireless data center designs generally focus on the reconfigurability of the wireless links between existing racks; however, they overlook two further design objectives. First, they must substantially upgrade or rebuild the surrounding environment of the existing data center in order to work. For example, earlier designs using 60 GHz and FSO communication must decorate the ceiling of the existing data center as a mirror surface so as to realize beyond-line-of-sight transmission of signals. In addition, in order to realize reconfigurability, specific optical equipment must be used, for example ceiling mirrors and convex/concave lenses. Worse still, when the wireless links are reconfigured, frequent and complex control operations on the optical network devices and infrastructure are often required.
According to an embodiment of the present invention, a flow scheduling method for a hybrid data center network based on congestion coefficients is provided.
As shown in Fig. 1, the flow scheduling method for a hybrid data center network based on congestion coefficients provided according to an embodiment of the present invention comprises:
Step S101: determining the links of the wired Fat-Tree structure and of the wireless 2D-Torus structure between the plurality of racks in the hybrid data center network system, and the route between any pair of racks;
Step S103: obtaining a plurality of to-be-injected data flows that arrive as a batch, and assigning to each batched flow a transmission path with the minimum sum of congestion coefficients;
Step S105: continuously obtaining to-be-injected data flows that arrive sequentially, and assigning to each sequential flow a transmission path with the minimum sum of congestion-coefficient increments.
Wherein, assigning to each batched flow a transmission path with the minimum sum of congestion coefficients comprises:
taking each batched flow in turn, and searching all of its candidate paths formed by the links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
calculating the congestion coefficient of each link, and calculating the congestion coefficient of each candidate path from the congestion coefficients of its links;
selecting the candidate path with the smallest congestion coefficient as the transmission path of the given batched flow.
Moreover, each batched flow is transmitted on only one candidate path, and the maximum congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, wherein the congestion coefficient of a link represents the maximum number of flows that may use this link, and the congestion coefficient of a candidate path represents the maximum number of flows that may pass through this path.
Moreover, the set of candidate paths obtained by searching all paths formed by the links of the wired Fat-Tree structure and the wireless 2D-Torus structure comprises k²/4 wired paths, one wireless path and one hybrid path, where k is the number of pods in the wireless 2D-Torus structure.
Meanwhile, assigning to each sequential flow a transmission path with the minimum sum of congestion-coefficient increments comprises:
obtaining newly arrived flows and old flows that need to be retransmitted, the newly arrived flows and the old flows to be retransmitted together constituting the sequential flows to be scheduled;
updating the congestion coefficient of every link in the wired Fat-Tree structure and the wireless 2D-Torus structure;
taking each sequential flow in turn, and searching all of its candidate paths formed by the links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
calculating the congestion ratio of each link, and calculating the congestion ratio of each candidate path from the congestion ratios of its links, wherein the congestion ratio is the increment of the congestion coefficient;
selecting the candidate path with the smallest congestion ratio as the transmission path of the given sequential flow.
Moreover, each sequential flow is transmitted on only one candidate path, and the maximum congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, wherein the congestion coefficient of a link represents the maximum number of flows that may use this link, and the congestion coefficient of a candidate path represents the maximum number of flows that may pass through this path.
In addition, determining the route between any pair of racks in the hybrid data center network system comprises:
setting k pods over the plurality of racks, and assigning each rack to a pod, the k pods forming the pod layer;
assigning a label to each rack according to its pod assignment, and building the pod-layer logic graph;
calculating, on the pod-layer logic graph, the path Path_hp from the pod where the source rack is located to the pod where the destination rack is located;
selecting each wireless connection in Path_hp such that the wireless connection path of Path_hp is the shortest, and obtaining the rack-level path Path_ht;
adding the required aggregation-layer switches and wired connections into Path_ht, thereby obtaining the route Path_h.
Moreover, the label comprises a label prefix and a label suffix, and assigning a label to each rack according to its pod assignment and building the pod-layer logic graph comprises:
for each prefix value x (x = 0, 1, ..., k-1), randomly selecting k/2 racks that have no label prefix yet, setting the label prefix of the selected racks to x, and ensuring that the label prefixes of any two adjacent racks are different;
setting a label suffix for each rack, the label suffix taking a value from 0 to k/2-1, wherein any two racks with the same label prefix have different label suffixes;
calculating the connectivity of the pod-layer logic graph under the current labeling;
repeating the above steps multiple times, and choosing the label allocation scheme with the maximum connectivity as the result of generating the pod-layer logic graph.
The technical solution of the present invention is further elaborated below with reference to a specific embodiment, VLCcube.
VLCcube is an enhanced structure that we propose for Fat-Tree, a representative wired data center network structure. VLCcube links all racks into a wireless Torus structure using VLC (Visible Light Communication), thereby forming a coupled structure of a wired Fat-Tree and a wireless Torus.
The VLC communication technology realizes signal transmission by modulating the visible light emitted by LEDs (Light Emitting Diodes) or LDs (Laser Diodes). It uses the OOK (On-Off Keying) modulation scheme: receiving a light signal represents a logical 1, and receiving no light signal represents a logical 0.
In terms of data rate, with a high-frequency LED light source, monochromatic VLC can achieve a data rate of 3 Gbps, and with trichromatic light the data rate can be extended to 10 Gbps. With an LD, a single 450 nm laser beam can achieve a data rate of 9 Gbps. The data rate of VLC is therefore fully capable of meeting the data transmission requirements of a data center.
In terms of transmission distance, LED-based VLC can provide 10 Gbps bandwidth within a range of 10 meters, which is sufficient for the communication between adjacent racks inside a data center. A project named Rojia has extended the communication distance of VLC to 1.4 km, although the data rate is limited. In addition, LD-based VLC can achieve high-rate communication over long distances (kilometer scale), because laser light has good linearity. We therefore use LED-based VLC for short-distance communication inside the data center, and LD-based VLC as the long-distance communication means.
In terms of availability, full-duplex VLC communication devices, i.e., transceivers, have been successfully developed and are commercially available. A development platform named MOMO provides developers with APIs and SDK kits for developing applications based on VLC; for example, VLC can be seamlessly integrated with the Internet of Things to provide indoor positioning services. In addition, PureLiFi provides developers with LED-based facilities for quickly configuring and testing visible light communication applications.
In summary, the VLC communication technology can be used for the communication tasks in a data center network without introducing additional wiring cost and without requiring major changes to the hardware environment of the existing data center.
Typically, several VLC transceivers can be installed on the top of each rack so that the racks are interconnected into a specific wireless topology. Given a rack R, when multiple neighbours send signals to it at the same time, the transceivers on top of R can receive these signals; if the signals cannot be distinguished efficiently, interference occurs and R cannot correctly decode the received signals.
We use the professional optical simulation software TracePro 7.0 to evaluate the interference when VLC is introduced into a data center. On one rack we place four transceivers in orthogonal directions, denoted T1, T2, T3 and T4. We let a beam of LED visible light be emitted towards T1 from 3 meters away, and then characterize the amount of optical signal received by each transceiver with its illuminance distribution diagram. If T2, T3 and T4 receive enough visible light, they suffer significant interference.
Figs. 3 to 6 show the observations for T1, T2, T3 and T4 in turn. Clearly, T1 captures most of the light signal, and the captured signal concentrates at the central position of the transceiver. Because of scattering during visible light propagation, some light deviates from the centre, so the non-central part also perceives some illumination. In contrast, the other three transceivers receive only very little light, since only a few positions perceive a normalized illuminance of 0.001 units; in particular, T3 receives almost no light signal, because T3 is located directly behind T1 and light can hardly bypass T1 to reach T3. Therefore, the interference of the light signal emitted towards T1 on the other three transceivers is very limited, and placing four transceivers on the top of a rack is quite reasonable, since the resulting interference is very small. Supported by this observation, we introduce VLC into the data center and design VLCcube, in which four orthogonal transceivers are placed on each rack.
Inside a data center, each server is connected to, and accesses the network through, the top-of-rack (ToR) switch of the rack where it is located. In a typical wired data center, the racks are networked into a layered structure through upper-layer switches and links, rather than being directly interconnected. We therefore focus on directly networking the racks of a wired data center into a dedicated topology through wireless links. Here, taking Fat-Tree, currently the most widely used structure, as an example, we network its racks into a wireless Torus structure. In this way we construct the hybrid structure VLCcube, which seamlessly blends the wired Fat-Tree structure with the wireless Torus structure.
Fig. 7 shows the top-of-rack view of the wireless part of VLCcube. As shown in Fig. 7, all racks in the Fat-Tree are networked with VLC into a two-dimensional wireless Torus structure, in which each row has m racks and each column has n racks. On the top of each rack, four visible light transceivers are configured towards four orthogonal directions so as to avoid mutual interference as far as possible. It should be noted that the wired part of VLCcube keeps the Fat-Tree structure unchanged; what we do is to network all racks into a Torus with VLC links. Let k denote the number of ports of each switch, where k is an even number. As in Fat-Tree, VLCcube has k pods, and each pod contains k/2 ToR switches and k/2 aggregation switches. In total, therefore, the wireless Torus of VLCcube involves k²/2 ToR switches.
To guarantee the performance of VLCcube, the 2D Torus must be carefully designed. Two major problems have to be solved in order to fully exploit the advantage of the wireless links: the setting of m and n, and the placement of the racks. In a 2D Torus, the switches of each dimension are connected into a ring, so the network diameter of the Torus is (m+n)/2; VLCcube therefore needs to set suitable m and n to minimize the network diameter. In addition, the number of long-distance links in the 2D Torus is m+n, and the data rate of these long-distance links is limited, so minimizing m+n also improves the total bandwidth of the network. For this reason as well, we need to minimize m+n. As for the rack placement problem, the path length between any two racks in Fat-Tree is either 2 hops or 4 hops; to minimize the network diameter of VLCcube, racks that are 4 hops apart should, as far as possible, be directly linked by the introduced VLC wireless links.
If the 2D Torus needs to accommodate k²/2 racks, the parameters m and n must satisfy:
m*(n-1) < k²/2 ≤ m*n
In VLCcube, the optimized parameter configuration is m = ⌈√(k²/2)⌉, and the value of n then depends on k²/2: if (m-1)² < k²/2 ≤ m*(m-1), then n takes the value m-1; otherwise, if m*(m-1) < k²/2 ≤ m², then n takes the value m, i.e., n = m = ⌈√(k²/2)⌉.
The values of m and n are chosen to minimize m+n. Since m*n is lower-bounded by k²/2 and m+n ≥ 2*√(m*n), m+n reaches its minimum value when and only when m = n. Considering again the inequality (m-1)² < k²/2 ≤ m*(m-1), the relation among m, n and k can be obtained.
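For illustration, the following Python sketch (a hypothetical helper, not part of the patent) computes m and n from the switch port count k under the sizing rule just described:

```python
import math

def torus_dimensions(k):
    """Return the 2D-Torus dimensions (m, n) for the k**2 / 2 ToR switches of VLCcube.

    Sizing rule from the text above: m = ceil(sqrt(k**2 / 2)); n is m - 1 when
    m * (m - 1) already covers all racks, and m otherwise, so that
    m * (n - 1) < k**2 / 2 <= m * n while m + n stays as small as possible.
    """
    racks = k * k // 2
    m = math.ceil(math.sqrt(racks))
    n = m - 1 if m * (m - 1) >= racks else m
    assert m * (n - 1) < racks <= m * n
    return m, n

# Example: k = 6 gives 18 racks, hence (m, n) = (5, 4), matching Figs. 8 and 9.
print(torus_dimensions(6))   # -> (5, 4)
```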
Given m and n, the next problem is the placement of the racks. In Fat-Tree, if a pair of racks belong to the same pod, the path length between them is 2 hops; otherwise it is 4 hops. VLCcube shortens the 4-hop wired paths between racks to 1-hop wireless paths as far as possible. In other words, the VLC connections in VLCcube should be used to interconnect racks that do not belong to the same pod.
To explain the rack placement strategy clearly, we first introduce the concept of the rack label. Fig. 8 shows the rack labels in a data center; in VLCcube every rack has a unique label. The label consists of two parts, a prefix and a suffix. The prefix takes a value from 0 to k-1 and indicates which pod the rack belongs to; the suffix takes a value from 0 to k/2-1 and indicates the number of the rack inside its pod. For example, the label 51 denotes the 2nd rack in the 6th pod.
We also introduce the pod-layer logic graph, as shown in Fig. 9. The pod-layer logic graph regards each pod as a node; if there are one or more wireless links between two pods, an edge is added between the corresponding nodes in the pod-layer logic graph. In the VLCcube shown in Figs. 8 and 9, k=6, m=5 and n=4; according to the above definitions, the corresponding pod-layer logic graph can be derived. Here the connectivity is measured by the number of edges in the pod-layer logic graph. The example pod-layer logic graph has 6 nodes and 15 edges, forming a complete graph. Therefore, for a given value of k, the total number of edges in the pod-layer logic graph is at most k*(k-1)/2.
With the above definitions and given values of k, m and n, we design three steps to construct the 2D wireless Torus. As shown in Figs. 8 and 9, the obtained Torus may not be a complete Torus in the strict sense.
Step 1: allocate label prefixes. For each prefix value x, we randomly select k/2 racks and set their prefix to x. Each prefix is allocated k/2 times because there are k/2 racks in each pod. The only constraint that this step must satisfy is that no rack may have the same prefix as any of its four neighbours. If a conflict occurs, this step is repeated until all prefixes have been assigned to the graph.
Step 2: calculate label suffixes. In the rack-layer logic graph, each rack has a suffix that distinguishes it from the other racks in the same pod; the suffix takes a value from 0 to k/2-1.
Step 3: improve the connectivity of the pod-layer logic graph. We repeat the above two steps multiple times, calculate the connectivity of the pod-layer logic graph obtained by each execution, and choose the allocation scheme with the maximum connectivity as the final result.
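A minimal Python sketch of this three-step construction is given below. It is an illustrative reconstruction only: the grid adjacency of the racks, the handling of conflicting trials (simply discarded), and measuring connectivity as the number of pod pairs joined by at least one VLC link are assumptions based on the description above.

```python
import random
from itertools import product

def build_vlccube_labels(k, m, n, trials=10):
    """Randomized three-step labeling sketch for the wireless 2D-Torus of VLCcube."""
    cells = list(product(range(m), range(n)))[: k * k // 2]      # rack positions

    def neighbours(r, c):
        return [((r - 1) % m, c), ((r + 1) % m, c), (r, (c - 1) % n), (r, (c + 1) % n)]

    best, best_edges = None, -1
    for _ in range(trials):
        prefix, ok = {}, True
        for x in range(k):                         # step 1: prefix x is used k/2 times
            free = [cell for cell in cells if cell not in prefix]
            for cell in random.sample(free, k // 2):
                if any(prefix.get(nb) == x for nb in neighbours(*cell)):
                    ok = False                     # adjacent racks must get different prefixes
                    break
                prefix[cell] = x
            if not ok:
                break
        if not ok:
            continue                               # conflict: discard this trial
        # step 2: the suffix 0..k/2-1 is simply the rack's rank inside its pod (omitted)
        # step 3: connectivity of the pod-layer logic graph under this labeling
        edges = {tuple(sorted((prefix[a], prefix[b])))
                 for a in cells for b in neighbours(*a)
                 if b in prefix and prefix[b] != prefix[a]}
        if len(edges) > best_edges:
            best, best_edges = dict(prefix), len(edges)
    return best, best_edges
```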
We further show that the above steps derive a correct and legal VLCcube.
When k ≥ 4, the above steps can obtain a feasible VLCcube structural scheme, and each pod appears k/2 times in the rack-layer logic graph.
In step 1, we ensure that each pod is allocated k/2 times in the rack-layer logic graph, and that each VLC link can only interconnect two different pods. If each rack in the rack-layer logic graph is regarded as receiving a colour, this is equivalent to proving that the rack-layer logic graph can be coloured with k colours. In fact, the rack-layer logic graph of VLCcube is a 4-regular graph, that is, its chromatic number is 4, so 4 colours are sufficient to colour the graph. Therefore, when k ≥ 4, a feasible configuration scheme of VLCcube must exist.
Meanwhile, the pod-layer logic graph of VLCcube must be connected; otherwise, if there were a pod that could not be reached by VLC links, the performance of VLCcube could not be guaranteed.
Meanwhile, the pod-layer logic graph of the VLCcube obtained by the above three steps is connected.
It is worth noting that the rack-layer view is a 2D Torus structure; whether it is a complete Torus or an incomplete Torus, it is a connected graph. That is, given any rack x_i y_i, a path to any destination rack x_j y_j can be found; when this path is mapped onto the pod-layer logic graph, a path from pod x_i to pod x_j is found. Therefore, the pod-layer logic graph of VLCcube is connected.
The above reasoning guarantees the soundness of the VLCcube construction method. Step 3 then selects the best result by repetition and thereby improves the connectivity of the pod-layer logic graph. The theoretical basis for doing so is that, after multiple executions, a better solution is more likely to be obtained; we verify its actual effect in the subsequent experiments.
From the perspective of topology design, VLCcube integrates the topological properties of Fat-Tree and Torus, including scalability, constant degree, multi-path and fault tolerance. In addition, VLCcube is easy to deploy and plug-and-play (once the visible light communication devices are placed, no further adjustment or control is needed during use). It should also be noted that VLCcube realizes rack-level wireless networking without making any change to the existing Fat-Tree structure or to the surrounding environment of the machine room.
For any pair of racks, wired paths, wireless paths and wired-wireless hybrid paths exist between them. In this embodiment we focus on designing the hybrid routing algorithm of VLCcube. In order to minimize network congestion, we model the network congestion coefficient of VLCcube and propose congestion-aware flow scheduling algorithms for batched flows and for sequential flows, respectively.
Given any pair of racks, a hybrid path Path_h contains both wired links and wireless links. That is, when designing the hybrid routing algorithm, the topological properties of both Fat-Tree and Torus must be considered. According to the characteristics of VLCcube itself, we design a top-down hybrid routing algorithm. Assuming that the source rack and the destination rack are x_i y_i and x_j y_j respectively, we first obtain the path from pod x_i to pod x_j in the pod-layer logic graph, and then materialize this pod-level path at the rack level; during the materialization, suitable VLC links must be selected. Finally, the wired links involved are added into the path.
First, the path from the source pod to the destination pod in the pod-layer logic graph, Path_hp, is calculated. This step is fairly simple because there are only k nodes in the pod-layer logic graph.
Then, each wireless link in Path_hp is selected, that is, the rack-level path Path_ht is computed. Between a pair of pods there may be several optional wireless links, and selecting different wireless links leads to different path lengths; therefore, for each hop in Path_hp, the wireless link that leads to the shortest path should be selected. In the VLCcube shown in Fig. 9, let the source rack be 11 and the destination rack be 41. In the pod-layer logic graph, pod 1 and pod 4 are neighbours, and in the rack-layer logic graph there are three optional links that directly interconnect pod 1 and pod 4. If the link between rack 10 and rack 40 is chosen, rack 11 first has to forward to rack 10 and rack 40 then has to forward the data to rack 41, with the result that both pod 1 and pod 4 need an aggregation-layer switch as a relay. However, if either of the other two links is chosen, only one aggregation-layer switch is needed as a relay, i.e., only one relay hop (11 to 12, or 11 to 10) is required. Thus the first choice leads to a 5-hop path, while either of the other two choices needs only 4 hops.
Finally, the required wired links are added into the path Path_ht. This step is needed because of the aggregation-layer switches added into the path. In each pod, the aggregation-layer switches and the ToR switches form a complete bipartite graph, so any aggregation-layer switch in a pod can serve as a relay between any two ToR switches of that pod. In this step we therefore choose the required aggregation-layer switches at random.
With the above three steps, the shortest hybrid path between any two racks can be calculated. The time complexity of the first step is O(k²), and the time complexities of the second and third steps are O(1). Therefore, the time complexity of the routing algorithm is O(k²). It is worth noting that k denotes the number of switch ports (usually below 100), so a complexity of O(k²) is acceptable.
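The following Python sketch illustrates the top-down hybrid routing procedure under several assumptions that are not stated in the patent: racks are represented as (pod, suffix) pairs, the pod-layer logic graph and the inter-pod VLC links are supplied as dictionaries, and the insertion of the randomly chosen aggregation-layer switches (step three) is only indicated by comments.

```python
from collections import deque

def hybrid_route(src_rack, dst_rack, pod_graph, vlc_links):
    """Top-down hybrid routing sketch for VLCcube.

    pod_graph: pod -> set of neighbouring pods (pod-layer logic graph).
    vlc_links: (pod_a, pod_b) -> list of rack-level VLC links [(rack_in_a, rack_in_b), ...].
    """
    src_pod, dst_pod = src_rack[0], dst_rack[0]

    # Step 1: shortest pod-level path Path_hp via BFS over the (at most k) pods.
    parent, queue = {src_pod: None}, deque([src_pod])
    while queue:
        p = queue.popleft()
        if p == dst_pod:
            break
        for q in pod_graph[p]:
            if q not in parent:
                parent[q] = p
                queue.append(q)
    pod_path, p = [], dst_pod
    while p is not None:              # the pod-layer logic graph is connected (see above)
        pod_path.append(p)
        p = parent[p]
    pod_path.reverse()

    # Step 2: for every pod-level hop pick the VLC link that keeps the rack-level
    # path Path_ht short, preferring a link that leaves from the rack we are at.
    rack_path = [src_rack]
    for a, b in zip(pod_path, pod_path[1:]):
        links = vlc_links.get((a, b)) or [(v, u) for u, v in vlc_links[(b, a)]]
        exit_rack, entry_rack = min(links, key=lambda l: l[0] != rack_path[-1])
        if exit_rack != rack_path[-1]:
            rack_path.append(exit_rack)   # intra-pod hop via a random aggregation switch
        rack_path.append(entry_rack)

    # Step 3: final intra-pod hop inside the destination pod, again relayed by an
    # aggregation-layer switch chosen at random (switch selection omitted here).
    if rack_path[-1] != dst_rack:
        rack_path.append(dst_rack)
    return pod_path, rack_path
```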
We introduce wireless links into the wired data center to improve its network performance; the main means is to network all racks into a Torus structure over wireless links. In order to give full play to the effect of the VLC links and to minimize network delay, we propose scheduling models that optimize the network congestion coefficient under batched flows and under sequential flows. In the Torus, because each rack is equipped with 4 transceivers, any rack can communicate with its 4 neighbours simultaneously. We first introduce the necessary concepts and definitions.
We denote a data center network by G(V, E), where V and E denote the node set and the edge set, respectively. F = (f_1, f_2, ..., f_δ) denotes the δ flows injected into the network. For each flow f_i = (s_i, d_i, b_i), s_i, d_i and b_i denote the source switch, the destination switch and the required bandwidth of the flow, respectively. φ denotes a scheduling scheme that can successfully transmit F.
Definition 1: Given F and φ, the congestion coefficient of any link e in the network is defined as t(e)/c(e), where t(e) and c(e) denote the traffic passing through link e and the capacity (bandwidth) of link e, respectively. The congestion coefficient of any link lies in the interval [0, 1]; in particular, when no flow passes through the link its congestion coefficient is 0, and when the link is fully used its congestion coefficient is 1.
Definition 2: The congestion coefficient of a path P is determined by its most congested link, i.e., it is the maximum of the congestion coefficients of the links on P.
With this definition we can locate the congested node within a path and judge whether a given path is capable of undertaking subsequent flow transmission tasks.
Definition 3: The SBF (scheduling batched flows) problem: given a data center network G(V, E) and the traffic F to be transmitted, the goal of batched flow scheduling is to find a reasonable path allocation scheme φ* under which the maximum link congestion coefficient of the network is minimized.
The SBF problem is modeled here as follows:
Minimize Z
subject to, among other constraints, b_min/c_max ≤ Z ≤ b_max/c_min
In the above model, i is an integer between 0 and δ, in(v) and out(v) denote the sets of flows entering and leaving node v of VLCcube, respectively, and t_f denotes the size of flow f. The first three formulas of the model ensure that each flow is transmitted on only one path; the fourth formula gives the upper and lower bounds of the objective Z, where b_max and b_min denote the maximum and minimum flow sizes and, correspondingly, c_max and c_min denote the maximum and minimum link capacities in VLCcube; the fifth formula requires that the congestion coefficient of every link must not exceed Z, where x_e^i indicates whether link e is occupied by flow f_i: x_e^i takes the value 1 if so, and 0 otherwise.
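The constraint formulas of this model are rendered as images in the original publication and are not reproduced above. The following LaTeX restatement is only a sketch assembled from the surrounding description; the binary path-selection variable x_{i,P} and the collapsing of the flow-conservation constraints over in(v) and out(v) into a single one-path-per-flow constraint are assumptions.

```latex
\begin{align*}
\min\;        & Z \\
\text{s.t.}\; & \textstyle\sum_{P \in \mathcal{P}_i} x_{i,P} = 1
              && \forall f_i \in F \quad \text{(each flow uses exactly one candidate path)} \\
              & t(e) = \textstyle\sum_{f_i \in F} \sum_{P \in \mathcal{P}_i,\; e \in P} x_{i,P}\, b_i
              && \forall e \in E \quad \text{(traffic carried by link } e\text{)} \\
              & t(e)/c(e) \le Z && \forall e \in E \\
              & b_{\min}/c_{\max} \le Z \le b_{\max}/c_{\min} \\
              & x_{i,P} \in \{0,1\} && \forall f_i \in F,\; P \in \mathcal{P}_i
\end{align*}
```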
The SBF problem is a typical integer linear programming problem and is NP-hard, so it cannot be solved exactly in polynomial time. Therefore, a lightweight algorithm is designed here to obtain a feasible solution. For any f_i ∈ F, we search out the three kinds of paths existing in VLCcube and take them as its candidate path set; in fact, this set contains k²/4 wired paths, one wireless path and one hybrid path. To solve the flow scheduling of F, a heuristic algorithm based on congestion coefficients is designed here.
Definition 4: Given the flow set F, after the candidate path set of every flow f_i has been calculated, the congestion coefficient of a link e ∈ E, denoted l_e, is the total number of candidate paths, over all flows in F, that pass through this link.
Definition 5: For any path P, its congestion coefficient, denoted l_P, is the sum of the congestion coefficients of all links on the path, i.e., l_P = ∑_{e∈P} l_e.
Essentially, the congestion coefficients of link e and path P characterize the probability that they are occupied by multiple flows. Therefore, in the heuristic algorithm we use l_P as the main basis for judging whether f_i should occupy P. Specifically, for f_i, the heuristic algorithm chooses, from all its candidate paths, the path with the smallest congestion coefficient as its transmission path.
Based on the definition of the congestion coefficient, Algorithm 1 gives the basic idea of our greedy algorithm. For each flow, we first search out its candidate paths, and then calculate the congestion coefficient of each link in VLCcube. After that, returning to every flow, for any flow f_i we calculate the congestion coefficients of all its candidate paths and select, from them, the one with the smallest congestion coefficient as the transmission path of f_i. The time complexity of this algorithm is O(δ*(k²+k+4)) in the stage of searching the candidate paths of all flows, and O(δ*(k²/4+2)) in the path screening stage. So, in general, the time complexity of the algorithm is O(δ*k²).
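As a concrete illustration of this greedy selection, the following Python sketch mirrors the description of Algorithm 1 (the pseudo-code itself is not reproduced in the text). The helper candidate_paths() that enumerates the k²/4 wired, one wireless and one hybrid candidate paths of a flow is an assumption, not part of the patent.

```python
from collections import Counter

def schedule_batched_flows(flows, candidate_paths):
    """Greedy SBF heuristic (sketch of Algorithm 1 as described above).

    flows: iterable of flow identifiers.
    candidate_paths(f): assumed helper returning the list of candidate paths of
    flow f, each path being a tuple of link identifiers.
    """
    # Definition 4: link congestion coefficient l_e = number of candidate paths,
    # over all flows, that traverse link e.
    l = Counter(e for f in flows for path in candidate_paths(f) for e in path)

    assignment = {}
    for f in flows:
        # Definition 5: path congestion coefficient l_P = sum of l_e over P's links;
        # pick the candidate path of f with the smallest l_P.
        assignment[f] = min(candidate_paths(f), key=lambda p: sum(l[e] for e in p))
    return assignment
```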
The congestion coefficient of link e means that at most l_e flows may use this link; similarly, the congestion coefficient of path P means that at most l_P flows may pass through this path. If no scheduling strategy is adopted, every candidate path has an equal probability of being chosen by f_i as its transmission path. Based on this observation, Theorems 4, 5 and 6 demonstrate the correctness of using the path congestion coefficient as the screening basis of the greedy algorithm.
Theorem 4: In VLCcube, given a flow f_i ∈ F and any link e in the network, the probability that e is used by f_i is l_e^{f_i}/(k²/4+2) if f_i ∈ F_e, and 0 otherwise.
Here F_e denotes the set of flows that may use link e, and l_e^{f_i} is the congestion coefficient contributed to link e by flow f_i, since more than one candidate path of f_i may pass through the link.
Proof: For an arbitrary flow f_i ∈ F_e, if l_e^{f_i} of the k²/4+2 candidate paths of f_i pass through e, then the probability that f_i uses e is l_e^{f_i}/(k²/4+2); otherwise, f_i cannot possibly use link e.
Theorem 5: In VLCcube, for an arbitrary flow f_i ∈ F_e, let η denote the number of flows that pass through link e under the scheduling scheme Φ; then the probabilities that η = 0 and that η = 1 can be calculated from the link congestion coefficients.
Proof: Given the flow set F, the flows use link e independently of each other; therefore the probabilities that no flow and that exactly one flow uses e can be computed, and correspondingly the probability that two or more flows use e can also be computed.
Theorem 6: Consider a flow f_i in F, let η denote the number of flows that pass through a path P, and let E(P) be the set of links composing path P; then for any P the probabilities that η = 0 and that η = 1 can be calculated.
Proof: For a path P among the candidate paths, η = 0 means that no flow uses the path, and η = 1 means that the path is used by f_i, or that at least one link of P is used by some flow other than f_i. The corresponding probabilities can therefore be calculated.
Following the conclusion of Theorem 4, Theorems 5 and 6 calculate the probabilities that no flow or only one flow uses link e and path P. It should be noted that when η ≥ 2, link e and path P are likely to become congested, because the transmission time of an earlier flow may cause a later long flow to time out and lose packets. Theorems 5 and 6 show that the larger the value of l_e, the more likely it is that two or more flows use link e or path P (i.e., the more likely congestion occurs). Therefore, the probability that path P becomes congested is proportional to the value of l_P, and the correctness of using the path congestion coefficient as the basis for screening paths in Algorithm 1 is thereby proved. Since the proposed greedy algorithm selects the path with the smallest congestion coefficient to transmit each flow, the network congestion rate of VLCcube is greatly reduced.
However, the traffic inside a data center does not necessarily arrive in batches; in fact, traffic usually arrives sequentially and dynamically. Let Φ_0 denote the currently existing flow scheduling strategy, F_N denote the newly arrived flows, and F_O denote the old flows that need to be retransmitted due to network reasons. Accordingly, the flow set to be scheduled is F_1 = F_N + F_O. With F_1 as input, we define the sequential flow scheduling problem as follows:
Definition 6: The SOF (scheduling online flows) problem, i.e., the dynamic flow scheduling problem, aims to obtain a new scheduling scheme Φ_1 such that the increment of the congestion ratio brought by transmitting the new traffic is minimal. Let ΔZ = Z_1 - Z_0, where Z_1 and Z_0 are the maximum congestion coefficients under Φ_1 and Φ_0, respectively; the goal is to minimize ΔZ.
The SOF problem may be triggered when new flows arrive or when some existing flows need to be retransmitted. The SOF problem is also an integer linear programming problem similar to SBF; for the sake of brevity, the details of its model are omitted here.
The SOF problem needs to minimize Z_1, so the strategy of Algorithm 1 could also be used to solve it; however, since the SOF problem may be triggered frequently, running that algorithm repeatedly would incur a huge computational overhead. Therefore, we only consider the flows in F_1 and propose a greedy algorithm to solve the SOF problem. For any flow in F_1, the basic idea is to let it use the path that increases the congestion ratio of the existing network the least as its transmission path.
As shown in Algorithm 2, our greedy strategy first determines the flows that need to be included in the scheduling scope, that is, it must distinguish the flows that have finished, the newly arrived flows and the flows whose transmission failed. The algorithm must know which devices and links are currently available, so the state of the whole network needs to be updated. After that, for each flow in F_1, the algorithm searches out its three kinds of candidate paths. After calculating the candidate paths of f_i, the algorithm calculates the congestion ratio of each candidate path according to the values of b_i and c_i, and then selects, from the candidates, the path with the smallest congestion ratio as the transmission path of f_i. After all flows in F_1 have been assigned a path, the algorithm returns the solution S_online of the SOF problem. Since F_1 contains δ_1 flows in total, the above process is executed δ_1 times, and each execution needs O(k²+k) time to search the candidate paths of a flow; therefore, the time complexity of the algorithm is O(δ_1*k²).
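The following Python sketch mirrors the description of Algorithm 2 above. The path-enumeration helper, the link-state dictionaries, and the reading of the "congestion ratio" as the utilisation a path's most loaded link would reach after admitting the flow are all assumptions made for illustration.

```python
def schedule_online_flows(new_flows, candidate_paths, link_traffic, link_capacity):
    """Greedy SOF heuristic (sketch of Algorithm 2 as described above).

    new_flows:       list of (flow_id, bandwidth b_i) for newly arrived and
                     retransmitted flows (the set F_1).
    candidate_paths: assumed helper, flow_id -> list of candidate paths.
    link_traffic:    dict link -> traffic already scheduled on it (updated in place).
    link_capacity:   dict link -> capacity c(e).
    """
    def congestion_after(path, b):
        # One plausible reading of the 'congestion ratio': the utilisation t(e)/c(e)
        # the most loaded link of the path would reach if the flow were added to it.
        return max((link_traffic.get(e, 0.0) + b) / link_capacity[e] for e in path)

    assignment = {}
    for f, b in new_flows:
        best = min(candidate_paths(f), key=lambda p: congestion_after(p, b))
        for e in best:
            link_traffic[e] = link_traffic.get(e, 0.0) + b   # keep the network state current
        assignment[f] = best
    return assignment
```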
Theorem 7: For sequentially arriving traffic, Algorithm 2 outperforms the ECMP scheduling method.
Proof: For any flow f_i in F_1, when the ECMP scheduling method is used, f_i is spread uniformly over the equal-cost candidate paths, so its expected congestion is the average congestion ratio of those paths.
In contrast, when Algorithm 2 is used, f_i is placed on the candidate path with the smallest congestion ratio, so its expected congestion is the minimum congestion ratio among its candidates.
Obviously, the congestion caused by Algorithm 2 does not exceed the congestion caused by the ECMP scheduling method, and the theorem is proved.
The effect of the scheduling method of the present invention is evaluated below.
We implement VLCcube and Fat-Tree with the professional network simulation software NS3 (Network Simulator). Given the value of k, the Fat-Tree structure is obtained directly, and VLCcube is then obtained according to the construction method given above. The bandwidth of the wired connections and of the short-range wireless links in VLCcube is set to 10 Gbps, while the bandwidth of the long-range wireless links is limited to 100 Mbps. The retransmission timeout (RTO) of the network is fixed at 2 seconds. Based on the above parameter settings, we first compare the quality of the two topologies, then compare the time complexity of the three routing algorithms for wired, wireless and hybrid paths, and finally focus on measuring the network performance of the two structures.
Our experiments consider three different traffic patterns: 1) Trace traffic: traffic recorded in Yahoo data centers; 2) Stride-i traffic: the server numbered x sends packets to the server numbered (x+i) mod N, where N is the total number of servers in the network; 3) Random traffic: the source and destination of every flow are chosen at random. Network throughput and packet loss rate are used to measure the performance of the network under the different traffic patterns.
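For reference, the Stride-i pattern described above can be generated with a small helper; the sketch below and the server count of a k-port Fat-Tree (k³/4 servers) are illustrative assumptions rather than part of the patent text.

```python
def stride_flows(num_servers, i):
    """Stride-i pattern: server x sends one flow to server (x + i) mod N."""
    return [(x, (x + i) % num_servers) for x in range(num_servers)]

# Example: Stride-2k flows for k = 6; a k-port Fat-Tree hosts k**3 / 4 servers.
k = 6
print(stride_flows(k ** 3 // 4, 2 * k)[:3])   # -> [(0, 12), (1, 13), (2, 14)]
```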
To verify the performance of the proposed scheduling algorithms, we first compare the network performance of VLCcube and Fat-Tree when both use the ECMP scheduling method, and then compare ECMP against the congestion-aware flow scheduling algorithms to show the influence they bring to the network performance of VLCcube. It should be noted that the arrival times of the sequential flows follow a Poisson distribution.
In order to compare the quality of VLCcube and Fat-Tree in topological aspect, we measure two kinds of average roads of network Electrical path length and network total bandwidth.As shown in Figure 10 and Figure 11, compare for Fat-Tree, VLCcube can provide more networks band Width, and possess shorter average path length.The reason for causing these advantages is that to introduce extra VLC in VLCcube wireless Link.Meanwhile, it is observed that influence of the VLC wireless links to network average path for introducing show it is marginal successively decrease become Gesture.That is, when network size is smaller, VLC wireless links can significantly more reduce average path length.In fact, The value of given k, there is k in VLCcube2Bar VLC wireless links, and wired in network and wireless link sum is k3/2+k2.With The increase of k, the ratio that VLC wireless links account for total link number is gradually reduced, so as to cause the appearance of above-mentioned edge effect.
We run the VLCcube construction method multiple times and select the best VLCcube construction plan among the results. For ease of comparison, the connectivity of the pod-layer logic graph of the generated VLCcube (the number of edges in the pod-layer logic graph) is normalized by that of the corresponding complete graph. In Figure 12, VLCcube1, VLCcube2 and VLCcube10 denote the pod-layer logic-graph connectivity of the VLCcube structures obtained by executing the construction method 1, 2 and 10 times, respectively. Clearly, the connectivity of the pod-layer logic graph decreases as k increases, and the more times the construction method is executed, the better the resulting structure, because a better rack placement scheme is more likely to be found.
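The repeated-construction selection can be sketched as follows; build_vlccube and connectivity are placeholders for the construction method and the connectivity measure described above, not the patented implementation.

```python
def best_construction(k, trials, build_vlccube, connectivity):
    """Run the VLCcube construction `trials` times and keep the plan whose
    pod-layer logic graph has the highest connectivity, normalized by the
    edge count of the complete graph on k pods."""
    complete_edges = k * (k - 1) / 2          # edges of the complete graph
    best_plan, best_score = None, -1.0
    for _ in range(trials):
        plan = build_vlccube(k)               # one random rack placement
        score = connectivity(plan) / complete_edges
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan, best_score
```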
We also further compare the time complexity of the routing algorithms that search for wired, wireless and hybrid paths. Figure 13 records the time overhead of the three algorithms. It can be seen that, as the network scale grows, the time consumption of the hybrid-path routing algorithm keeps increasing and is larger than that of the other two. The time overhead of the wireless-path routing algorithm also shows an increasing trend, growing from 0.2 ms to 0.575 ms. It is worth noting that the time overhead of the wired-path routing algorithm is the smallest and stays stably at a very low level, namely 0.09 ms. In general, the time complexity of the wired-path routing algorithm is constant, while the complexities of the other two routing algorithms are proportional to k and k², respectively.
Therefore, from the above results we can conclude that VLCcube provides more network bandwidth and has a shorter average path length, that is, it has better topological properties.
We also compare the network throughput and packet loss rate of VLCcube and Fat-Tree when ECMP is used. Under the different traffic patterns, we adjust the network scale by increasing the value of k from 6 to 60, and observe and record the network throughput and packet loss rate. In addition, our experiments also vary the mean flow size from 5 Mb to 300 Mb to reveal the influence of flow size on network performance. The flow sizes based on the real Trace, however, cannot be changed, since they are determined by the Trace data. In every test, the network throughput is normalized by the throughput of VLCcube at k = 60, or by the throughput when the mean flow size is 300 Mb.
For the Trace traffic, the Yahoo! Trace used here records the essential information of every flow in its six distributed data centers over a period of time, including the IP addresses of the source and destination servers, the flow size, and the port numbers used. By identifying the port numbers used by a flow, we can judge whether the flow is intra-data-center traffic or traffic across data centers. Then, in the experiments, we inject k³ randomly selected flows into VLCcube and Fat-Tree respectively to assess their performance.
Figure 14 and Figure 15 record the throughput and packet loss rate of VLCcube and Fat-Tree under the Trace traffic as the value of k increases from 6 to 60. The results show that, compared with Fat-Tree, VLCcube provides more than 8.5% higher throughput and reduces the packet loss rate by 39%. The underlying reason is that VLCcube introduces wireless links, so every flow has more candidate paths to choose from.
For the Stride-2k traffic, with the mean flow size fixed at 150 Mb, we increase the value of k from 6 to 60, inject k³ flows, and record the network throughput and packet loss rate; Figure 16 and Figure 17 show the experimental results. As k increases, both VLCcube and Fat-Tree can transmit a larger number of flows, so their throughput keeps growing. On average, however, VLCcube provides 15.14% more throughput than Fat-Tree, and its packet loss rate is also lower.
Meanwhile, to measure the influence of flow size on performance, we fix k = 36, increase the mean flow size from 50 Mb to 300 Mb, and inject k³ flows into the network. As shown in Figure 18 and Figure 19, VLCcube still outperforms Fat-Tree. Specifically, even when the mean flow size is 150 Mb, VLCcube achieves 14.31% more throughput, and its packet loss rate is also much smaller.
For the random traffic pattern, the source and destination servers of each flow are selected at random, and likewise k³ flows are injected into the network.
First, we fix the mean flow size at 150 Mb and increase the value of k, which determines the network scale, from 6 to 60. As shown in Figure 20, the throughput of both VLCcube and Fat-Tree increases sharply; on average, VLCcube is still superior, providing 10.44% more throughput than Fat-Tree. Figure 21 shows that, once k ≥ 18, Fat-Tree always suffers from a very high packet loss rate, whereas the packet loss rate in VLCcube always remains at a low level. Specifically, the average packet loss rates of VLCcube and Fat-Tree are 0.27% and 2.45%, respectively.
We further measure the influence of flow size with k fixed at 36. Figure 22 and Figure 23 show that, as the size of the injected flows increases, the throughput of the network keeps rising, and the packet loss rate rises at the same time. Nevertheless, VLCcube still maintains a clear advantage over Fat-Tree: it keeps a lower packet loss rate while achieving higher throughput.
Therefore, the experiments confirm that, when the ECMP scheduling algorithm is used in both structures, VLCcube outperforms Fat-Tree in network performance under all the different traffic patterns.
Although the above experiments have fully demonstrated that VLCcube is better than Fat-Tree, the topological advantage of VLCcube is in fact not fully exploited when ECMP is used. Therefore, we additionally evaluate the performance of VLCcube when the congestion-aware flow scheduling algorithms are used.
First, as k increases from 6 to 42, we inject k³ batch random flows into VLCcube. The throughput obtained in the experiments is normalized by the throughput obtained when the ECMP scheduling method is used. As shown in Figure 24 and Figure 25, ECMP not only yields lower throughput but also subjects the network to a higher packet loss rate. In contrast, the SBF algorithm we propose for batch traffic achieves 1.54 times the throughput, and the packet loss rate of the network is also very low (in particular, it drops markedly as k increases from 6 to 18). The fundamental reason is that SBF provides more candidate paths for the traffic, so that the traffic is spread across VLCcube as much as possible.
In addition, VLCcube uses our proposed SOF algorithm to schedule sequential flows. In the experiments, we increase the value of k from 6 to 24 and likewise inject k³ sequential random flows. The arrival times of the flows are set to follow a Poisson distribution with parameter λ. Figure 26 and Figure 27 record the experimental results; ECMP-x and SOF-x in the figures denote the results of the ECMP and SOF scheduling methods, respectively, when λ = x. The results show that, when λ = 2 and λ = 4, SOF obtains 2.22 times and 5.56 times the throughput of ECMP, while its packet loss rate is only 0.340 times and 0.178 times that of ECMP, respectively. We also notice that, for λ = 4, both ECMP and SOF perform better than for λ = 2. The reason is that, as λ increases, the flow arrival times are relatively more spread out and fewer flows arrive at the same time, so congestion is less likely to occur and the packet loss rate decreases correspondingly.
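The Poisson arrival process used for the sequential flows can be sketched as below; interpreting λ as the mean inter-arrival gap is an assumption made here so that a larger λ spreads arrivals out, consistent with the observation above.

```python
import random

def poisson_arrival_times(n_flows, lam):
    """Arrival times of a Poisson process whose inter-arrival gaps are
    exponential with mean lam (assumption: larger lam = sparser arrivals)."""
    t, times = 0.0, []
    for _ in range(n_flows):
        t += random.expovariate(1.0 / lam)   # exponential gap with mean lam
        times.append(t)
    return times

# e.g. arrival times for k = 24, i.e. k**3 sequential flows, with lam = 4
arrivals = poisson_arrival_times(24 ** 3, lam=4)
```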
Therefore, SBF and SOF can effectively improve the network performance of VLCcube and, compared with ECMP, reduce the packet loss rate.
In summary, by means of the above technical solution of the present invention, a novel data center structure, VLCcube, that is easy to deploy and highly efficient is presented herein. To enhance the existing wired data center network Fat-Tree, emerging visible light communication technology is used herein to interconnect the racks into a wireless Torus structure; the introduced visible-light links effectively reduce the average path length of the network and increase the network bandwidth. Meanwhile, congestion-aware flow scheduling methods are designed herein for the batch and sequential traffic patterns separately, so as to make full use of the advantages of VLCcube. By coupling the wireless 2D-Torus structure established among the racks with the existing wired Fat-Tree rack structure so that the hybrid structure works as a whole, cross-rack wireless connections requiring no additional control are established without changing the devices and layout of the existing wired data center, which significantly extends the wired data center at a small cost and improves network flexibility. Meanwhile, by splitting the arriving data flows into batch flows and sequential flows and scheduling the batch flows first and the sequential flows afterwards, the network performance of the hybrid structure is further improved. The experimental evaluation results show that VLCcube is significantly better than Fat-Tree, and that the flow scheduling algorithms proposed herein can fully improve the performance of VLCcube.
Those of ordinary skill in the art should understand that the foregoing is merely a specific embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (3)

1. A hybrid data center network flow scheduling method based on congestion coefficients, characterized by comprising:
determining the links formed between the multiple racks by the wired Fat-Tree structure and the wireless 2D-Torus structure in the hybrid data center network system, and the route between any pair of racks;
obtaining a plurality of batch-arriving data flows to be injected; for each batch flow in turn, searching all candidate paths of that flow composed of links in the wired Fat-Tree structure and the wireless 2D-Torus structure; computing the congestion coefficient of each of the links, and computing the congestion coefficient of each of the candidate paths according to the congestion coefficients of the links; and choosing the candidate path with the smallest congestion coefficient as the transmission path of the given batch flow; wherein the candidate paths comprise k²/4 wired paths, one wireless path and one hybrid path, k being the number of pods in the wireless 2D-Torus structure; wherein each batch flow is transmitted on only one candidate path, and the maximum of the congestion coefficients of the links does not exceed the congestion coefficient of the candidate path, the congestion coefficient of a link denoting the maximum number of flows using that link and the congestion coefficient of a candidate path denoting the maximum number of flows carried by that path;
continuously obtaining a plurality of sequentially arriving data flows to be injected, acquiring newly arrived flows and old flows that need to be retransmitted, and taking the newly arrived flows and the old flows that need to be retransmitted as the sequential flows to be scheduled; updating the congestion coefficient of every link in the wired Fat-Tree structure and the wireless 2D-Torus structure; for each sequential flow in turn, searching all candidate paths of that flow composed of links in the wired Fat-Tree structure and the wireless 2D-Torus structure; computing the congestion ratio of each of the links, and computing the congestion ratio of each of the candidate paths according to the congestion ratios of the links, the congestion ratio being the increment of the congestion coefficient; and choosing the candidate path with the smallest congestion ratio as the transmission path of the given sequential flow; wherein each sequential flow is transmitted on only one candidate path, and the maximum of the congestion coefficients of the links does not exceed the congestion coefficient of the candidate path, the congestion coefficient of a link denoting the maximum number of flows using that link and the congestion coefficient of a candidate path denoting the maximum number of flows carried by that path.
2. The hybrid data center network flow scheduling method based on congestion coefficients according to claim 1, characterized in that determining the route between any pair of racks in the hybrid data center network system comprises:
setting k pods among the multiple racks, and assigning each rack to a pod so as to form the k pod layers;
assigning an identifier to each rack according to its pod layer and building a pod-layer logic graph;
computing, on the pod-layer logic graph, the path Path_hp from the pod where the source rack is located to the pod where the target rack is located;
selecting each wireless connection in Path_hp such that the wireless portion of Path_hp is the shortest, and obtaining the rack-level path Path_ht;
adding the aggregation-layer switches and the required wired connections into Path_ht to obtain the route Path_h.
3. The hybrid data center network flow scheduling method based on congestion coefficients according to claim 2, characterized in that the identifier comprises an identifier prefix and an identifier suffix, and assigning an identifier to each rack according to its pod layer and building the pod-layer logic graph comprises:
for any x ∈ [0, k], randomly selecting k/2 racks that have no identifier prefix, setting the identifier prefix of the selected racks to x, and ensuring that the identifier prefixes of any two adjacent racks are different;
setting the identifier suffix for each rack, the identifier suffix taking values from 0 to k/2-1, with the identifier suffixes of any two racks having the same identifier prefix being different;
computing the connectivity of the pod-layer logic graph under the current identification;
repeating the above steps multiple times, and choosing the identifier allocation scheme with the maximum connectivity as the generated pod-layer logic graph.
CN201611047098.1A 2015-12-30 2016-11-21 A kind of blended data central site network stream scheduling method based on congestion coefficient Active CN106453084B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511023066 2015-12-30
CN2015110230663 2015-12-30

Publications (2)

Publication Number Publication Date
CN106453084A CN106453084A (en) 2017-02-22
CN106453084B true CN106453084B (en) 2017-07-07

Family

ID=58219462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611047098.1A Active CN106453084B (en) 2015-12-30 2016-11-21 A kind of blended data central site network stream scheduling method based on congestion coefficient

Country Status (1)

Country Link
CN (1) CN106453084B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109302711B (en) * 2018-08-24 2021-08-13 西安电子科技大学 Energy-saving deployment method of reconfigurable Fat-Tree hybrid data center network
CN115086185B (en) * 2022-06-10 2024-04-02 清华大学深圳国际研究生院 Data center network system and data center transmission method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075394A (en) * 2011-01-14 2011-05-25 清华大学 P2i interconnecting structure-based data center
WO2013156903A1 (en) * 2012-04-20 2013-10-24 Telefonaktiebolaget L M Ericsson (Publ) Selecting between equal cost shortest paths in a 802.1aq network using split tiebreakers
WO2014198053A1 (en) * 2013-06-14 2014-12-18 Microsoft Corporation Fault tolerant and load balanced routing
CN104767694A (en) * 2015-04-08 2015-07-08 大连理工大学 Data stream forwarding method facing Fat-Tree data center network architecture

Also Published As

Publication number Publication date
CN106453084A (en) 2017-02-22

Similar Documents

Publication Publication Date Title
CN105634953B (en) A kind of networking of blended data center and method for routing based on visible light communication
CN108401015A (en) A kind of data center network method for routing based on deeply study
US20140334820A1 (en) Method and system for configuring a connection-oriented packet network over a wavelength division multiplexed optical network
CA2490075A1 (en) Integrated wireless distribution and mesh backhaul networks
KR101548695B1 (en) Apparatus and method for topology design of hybrid optical networks on chip
CN107005480A (en) The system and method cooperated for SDT and NFV and SDN
CN106165492A (en) Distributed Route Selection in wireless network
CN108111411A (en) Backbone network and its active path planning system and planing method
CN112104491B (en) Service-oriented network virtualization resource management method
CN104518961B (en) It is grouped the leading method and device of optical transport network
CN106453084B (en) A kind of blended data central site network stream scheduling method based on congestion coefficient
CN105472484A (en) Wave channel balancing route wavelength allocation method of power backbone optical transport network
CN108966053A (en) A kind of cross-domain route computing method of multiple-domain network dynamic domain sequence and device
CN107147530A (en) A kind of virtual network method for reconfiguration based on resource conservation
CN106713138A (en) Cross-domain transmission method and apparatus of streaming data
CN101330411B (en) Method and system for simulating large-scale network topological
CN106911521B (en) Based on polycyclic network on mating plate Topology Structure Design method
CN102025615B (en) Method and device for planning paths of small-granularity services in optical communication network
CN103795641B (en) The optical network resource management method mapped based on multidimensional frame
CN105430538B (en) A kind of inter-domain routing method based on optical-fiber network subtopology figure
CN102523170B (en) Method for configuring regenerators in wavelength division multiplexing optical network
CN105453489A (en) Improved ring topology structure and application method thereof
CN107181680A (en) A kind of method for realizing SDO functions, system and SDON systems
CN104093182A (en) Method for acquiring a plurality of reliable communication paths based on field intensity in multi-layer wireless network
CN107959642A (en) For measuring the methods, devices and systems of network path

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant