CN106453084A - Mixed data center network flow scheduling method based on congestion coefficients - Google Patents

Mixed data center network flow scheduling method based on congestion coefficients

Info

Publication number
CN106453084A
Authority
CN
China
Prior art keywords
stream
path
congestion coefficient
frame
congestion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611047098.1A
Other languages
Chinese (zh)
Other versions
CN106453084B (en)
Inventor
郭得科
罗来龙
任棒棒
苑博
刘云飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Publication of CN106453084A publication Critical patent/CN106453084A/en
Application granted granted Critical
Publication of CN106453084B publication Critical patent/CN106453084B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/14: Routing performance; Theoretical aspects
    • H04L45/70: Routing based on monitoring results
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H04L47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L47/50: Queue scheduling
    • H04L47/62: Queue scheduling characterised by scheduling criteria
    • H04L47/621: Individual queue per connection or flow, e.g. per VC

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a mixed data center network flow scheduling method based on congestion coefficients. The method comprises the steps of: determining the links of the wired Fat-Tree structure and the wireless 2D-Torus structure among the racks of a mixed data center network system, and the route between any pair of racks; obtaining a plurality of to-be-injected data flows that arrive in a batch, and assigning to each batched flow a transmission path whose sum of congestion coefficients is minimal; and continuously obtaining a plurality of to-be-injected data flows that arrive in sequence, and assigning to each sequential flow a transmission path whose sum of congestion-coefficient increments is minimal. By partitioning the arriving data flows into batched flows and sequential flows and scheduling the batched flows before the sequential flows, the method further improves the network performance of the hybrid structure.

Description

Mixed data center network flow scheduling method based on congestion coefficients
Technical field
The present invention relates to the field of hybrid communication, and in particular to a mixed data center network flow scheduling method based on congestion coefficients.
Background technology
A data center is the infrastructure of online applications and basic services. Thousands of servers and switches are interconnected through a data center network (DCN). Current data center networks fall into two main schools, namely wired data centers and wireless data centers. Inside a wired data center, the servers and switches are networked over wired links such as twisted pair and optical fiber; Fat-Tree and VL2 belong to this class. Inside a wireless data center, the hosts communicate over wireless links: either the racks are interconnected into a wireless network, or all servers and switches are connected to form a fully wireless network structure.
Wired data center networks have inherent defects. First, a wired data center either over-subscribes its links, or maintains good network performance at a substantial cost; simplifying the design excessively reduces the cost but cannot guarantee good network performance. Second, extending an existing data center is difficult and complicated. Third, a wired data center requires a large amount of cabling and maintenance cost. Finally, large-scale wired data centers usually adopt a multi-layer structure; as a result, two servers belonging to different racks must communicate over upper-layer links even if they are physically very close to each other.
For the problems of high expansion cost and poor flexibility of wired data centers in the prior art, no effective solution has been proposed so far.
Summary of the invention
In view of this, it is an object of the present invention to propose a mixed data center network flow scheduling method based on congestion coefficients, which can establish control-free cross-rack wireless connections without changing the existing devices and layout of a wired data center, significantly extend the wired data center at a small cost, improve network flexibility, and at the same time improve network performance.
Based on the above purpose, the technical solution provided by the present invention is as follows:
According to one aspect of the present invention, a mixed data center network flow scheduling method based on congestion coefficients is provided, including:
determining the links of the wired Fat-Tree structure and the wireless 2D-Torus structure among the racks of a mixed data center network system, and the route between any pair of racks;
obtaining a plurality of to-be-injected data flows that arrive in a batch, and assigning to each batched flow a transmission path whose sum of congestion coefficients is minimal; and
continuously obtaining a plurality of to-be-injected data flows that arrive in sequence, and assigning to each sequential flow a transmission path whose sum of congestion-coefficient increments is minimal.
Herein, assigning to each batched flow a transmission path whose sum of congestion coefficients is minimal includes:
taking each batched flow in turn, and searching all of its candidate paths composed of links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
computing the congestion coefficient of each link, and computing the congestion coefficient of each candidate path from the congestion coefficients of its links; and
choosing the candidate path with the minimal congestion coefficient as the transmission path of the given batched flow.
Moreover, each batched flow is transmitted on only one candidate path, and the maximal congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, where the congestion coefficient of a link denotes the maximal number of flows that may use this link, and the congestion coefficient of a candidate path denotes the maximal number of flows that may traverse this path.
Moreover, searching all candidate paths composed of links of the wired Fat-Tree structure and the wireless 2D-Torus structure yields k²/4 wired paths, one wireless path and one hybrid path, where k is the number of pods of the wireless 2D-Torus structure.
Meanwhile, assigning to each sequential flow a transmission path whose sum of congestion-coefficient increments is minimal includes:
obtaining the newly arrived flows and the old flows that need to be retransmitted, and taking the newly arrived flows together with the old flows to be retransmitted as the sequential flows to be scheduled;
updating the congestion coefficient of every link of the wired Fat-Tree structure and the wireless 2D-Torus structure;
taking each sequential flow in turn, and searching all of its candidate paths composed of links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
computing the congestion ratio of each link, and computing the congestion ratio of each candidate path from the congestion ratios of its links, where the congestion ratio is the increment of the congestion coefficient; and
choosing the candidate path with the minimal congestion ratio as the transmission path of the given sequential flow.
Moreover, each sequential flow is transmitted on only one candidate path, and the maximal congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, where the congestion coefficient of a link denotes the maximal number of flows that may use this link, and the congestion coefficient of a candidate path denotes the maximal number of flows that may traverse this path.
In addition, determining the route between any pair of racks in the mixed data center network system includes:
setting k pods among the racks, and putting each rack into a pod to form k pod layers;
assigning an identifier to each rack according to its pod layer and building a pod-layer logic graph;
computing, on the pod-layer logic graph, the initial path Path_hp from the pod of the source rack to the pod of the destination rack;
selecting each wireless link of Path_hp so that the wireless connection path of Path_hp is shortest, and obtaining the rack-level path Path_ht; and
adding the aggregation-layer switches and the required wired connections into Path_ht, and obtaining the route Path_h.
Moreover, the identifier includes an identifier prefix and an identifier suffix, and assigning an identifier to each rack according to its pod layer and building the pod-layer logic graph includes:
for any x ∈ [0, k], randomly selecting k/2 racks that have no identifier prefix yet, setting the identifier prefix of each selected rack to x, and ensuring that any two adjacent racks have different identifier prefixes;
setting an identifier suffix for each rack, the identifier suffix taking a value from 0 to k/2-1, and any two racks with the same identifier prefix having different identifier suffixes;
computing the connectivity of the pod-layer logic graph under the current identifier assignment; and
repeating the above steps several times, and choosing the identifier assignment with the maximal connectivity to generate the pod-layer logic graph.
It can be seen from the above that, by networking the racks into a wireless 2D-Torus structure and coupling it with the existing wired Fat-Tree rack structure so that the hybrid structure works as a whole, the technical solution provided by the present invention establishes control-free cross-rack wireless connections without changing the existing devices and layout of the wired data center, significantly extends the wired data center at a small cost, and improves network flexibility; meanwhile, by partitioning the arriving data flows into batched flows and sequential flows and scheduling the batched flows before the sequential flows, the network performance of the hybrid structure is further improved.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow chart of a mixed data center network flow scheduling method based on congestion coefficients according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the inter-rack connections and wireless signal transmission in a prior-art wireless data center network system;
Fig. 3 is the illumination distribution received by the signal transceiver facing the light source in the method according to the embodiment of the present invention;
Fig. 4 is the illumination distribution received by a signal transceiver placed sideways to the light source in the method according to the embodiment of the present invention;
Fig. 5 is the illumination distribution received by the signal transceiver facing away from the light source in the method according to the embodiment of the present invention;
Fig. 6 is the illumination distribution received by another signal transceiver placed sideways to the light source in the method according to the embodiment of the present invention;
Fig. 7 is the rack-top view of the mixed data center network system in the method according to the embodiment of the present invention;
Fig. 8 is the rack-layer logic graph of VLCcube in the method according to the embodiment of the present invention;
Fig. 9 is the pod-layer logic graph of VLCcube in the method according to the embodiment of the present invention;
Fig. 10 is a column chart comparing the average path length of VLCcube and Fat-Tree against the value of k in the method according to the embodiment of the present invention;
Fig. 11 is a column chart comparing the total network bandwidth of VLCcube and Fat-Tree against the value of k in the method according to the embodiment of the present invention;
Fig. 12 is a column chart comparing the pod-layer connectivity metric of VLCcube and Fat-Tree against the value of k in the method according to the embodiment of the present invention;
Fig. 13 is a column chart comparing the routing algorithm complexity metric of VLCcube and Fat-Tree against the value of k in the method according to the embodiment of the present invention;
Fig. 14 is a column chart comparing the throughput of VLCcube and Fat-Tree under Trace traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 15 is a column chart comparing the packet loss rate of VLCcube and Fat-Tree under Trace traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 16 is a column chart comparing the throughput of VLCcube and Fat-Tree under Stride-2k traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 17 is a column chart comparing the packet loss rate of VLCcube and Fat-Tree under Stride-2k traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 18 is a column chart comparing the throughput of VLCcube and Fat-Tree under Stride-2k traffic against the flow size in the method according to the embodiment of the present invention;
Fig. 19 is a column chart comparing the packet loss rate of VLCcube and Fat-Tree under Stride-2k traffic against the flow size in the method according to the embodiment of the present invention;
Fig. 20 is a column chart comparing the throughput of VLCcube and Fat-Tree under random traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 21 is a column chart comparing the packet loss rate of VLCcube and Fat-Tree under random traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 22 is a column chart comparing the throughput of VLCcube and Fat-Tree under random traffic against the flow size in the method according to the embodiment of the present invention;
Fig. 23 is a column chart comparing the packet loss rate of VLCcube and Fat-Tree under random traffic against the flow size in the method according to the embodiment of the present invention;
Fig. 24 is a column chart comparing the throughput of ECMP and SBF under batched traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 25 is a column chart comparing the packet loss rate of ECMP and SBF under batched traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 26 is a column chart comparing the throughput of ECMP-2, ECMP-4, SOF-2 and SOF-4 under sequential traffic against the value of k in the method according to the embodiment of the present invention;
Fig. 27 is a column chart comparing the packet loss rate of ECMP-2, ECMP-4, SOF-2 and SOF-4 under sequential traffic against the value of k in the method according to the embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are further described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
To reduce the cost of extending a wired data center and to increase network flexibility, several rack-level wireless data center network structures have been proposed. As shown in Fig. 2, the racks in the network are connected by wireless links, and the signals between different racks are reflected onto the target rack by a mirror surface through precise adjustment of the launch angle. In particular, the 60 GHz radio frequency technology and the free-space optical (FSO) communication technology are used to establish the wireless links between racks. Such schemes can improve the bandwidth of a wired data center and reduce packet delay. In addition, the wireless links can be dynamically reconfigured to meet the demands of the current communication pattern.
Existing inter-rack wireless data center designs are mostly devoted to the reconfigurability of the wireless links, but they ignore two issues behind this design goal. First, the surrounding environment of the existing data center has to be completely upgraded or rebuilt; for example, the designs that adopt the 60 GHz and FSO communication technologies must decorate the ceiling of the existing data center into a mirror surface to realize beyond-line-of-sight signal transmission. In addition, specific optical devices, such as ceiling mirrors and convex/concave lenses, are necessary to realize reconfigurability. Even worse, whenever the wireless links are reconfigured, frequent and complex control operations have to be performed on the optical devices and infrastructure.
According to one embodiment of the present invention, a mixed data center network flow scheduling method based on congestion coefficients is provided.
As shown in Fig. 1, the mixed data center network flow scheduling method based on congestion coefficients provided according to the embodiment of the present invention includes:
Step S101: determining the links of the wired Fat-Tree structure and the wireless 2D-Torus structure among the racks of the mixed data center network system, and the route between any pair of racks;
Step S103: obtaining a plurality of to-be-injected data flows that arrive in a batch, and assigning to each batched flow a transmission path whose sum of congestion coefficients is minimal; and
Step S105: continuously obtaining a plurality of to-be-injected data flows that arrive in sequence, and assigning to each sequential flow a transmission path whose sum of congestion-coefficient increments is minimal.
Herein, assigning to each batched flow a transmission path whose sum of congestion coefficients is minimal includes:
taking each batched flow in turn, and searching all of its candidate paths composed of links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
computing the congestion coefficient of each link, and computing the congestion coefficient of each candidate path from the congestion coefficients of its links; and
choosing the candidate path with the minimal congestion coefficient as the transmission path of the given batched flow.
Moreover, each batched flow is transmitted on only one candidate path, and the maximal congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, where the congestion coefficient of a link denotes the maximal number of flows that may use this link, and the congestion coefficient of a candidate path denotes the maximal number of flows that may traverse this path.
Moreover, searching all candidate paths composed of links of the wired Fat-Tree structure and the wireless 2D-Torus structure yields k²/4 wired paths, one wireless path and one hybrid path, where k is the number of pods of the wireless 2D-Torus structure.
Meanwhile, assigning to each sequential flow a transmission path whose sum of congestion-coefficient increments is minimal includes:
obtaining the newly arrived flows and the old flows that need to be retransmitted, and taking the newly arrived flows together with the old flows to be retransmitted as the sequential flows to be scheduled;
updating the congestion coefficient of every link of the wired Fat-Tree structure and the wireless 2D-Torus structure;
taking each sequential flow in turn, and searching all of its candidate paths composed of links of the wired Fat-Tree structure and the wireless 2D-Torus structure;
computing the congestion ratio of each link, and computing the congestion ratio of each candidate path from the congestion ratios of its links, where the congestion ratio is the increment of the congestion coefficient; and
choosing the candidate path with the minimal congestion ratio as the transmission path of the given sequential flow.
Moreover, each sequential flow is transmitted on only one candidate path, and the maximal congestion coefficient of any link does not exceed the congestion coefficient of the candidate path, where the congestion coefficient of a link denotes the maximal number of flows that may use this link, and the congestion coefficient of a candidate path denotes the maximal number of flows that may traverse this path.
In addition, determining the route between any pair of racks in the mixed data center network system includes:
setting k pods among the racks, and putting each rack into a pod to form k pod layers;
assigning an identifier to each rack according to its pod layer and building a pod-layer logic graph;
computing, on the pod-layer logic graph, the initial path Path_hp from the pod of the source rack to the pod of the destination rack;
selecting each wireless link of Path_hp so that the wireless connection path of Path_hp is shortest, and obtaining the rack-level path Path_ht; and
adding the aggregation-layer switches and the required wired connections into Path_ht, and obtaining the route Path_h.
Moreover, the identifier includes an identifier prefix and an identifier suffix, and assigning an identifier to each rack according to its pod layer and building the pod-layer logic graph includes:
for any x ∈ [0, k], randomly selecting k/2 racks that have no identifier prefix yet, setting the identifier prefix of each selected rack to x, and ensuring that any two adjacent racks have different identifier prefixes;
setting an identifier suffix for each rack, the identifier suffix taking a value from 0 to k/2-1, and any two racks with the same identifier prefix having different identifier suffixes;
computing the connectivity of the pod-layer logic graph under the current identifier assignment; and
repeating the above steps several times, and choosing the identifier assignment with the maximal connectivity to generate the pod-layer logic graph.
The technical solution of the present invention is further elaborated below with the specific embodiment VLCcube.
VLCcube is the enhancement proposed herein of Fat-Tree, a representative wired data center network structure. VLCcube networks all racks into a wireless Torus structure with VLC (Visible Light Communication) links, forming a coupled structure of the wired Fat-Tree and the wireless Torus.
VLC transmits signals by modulating the visible light emitted by LEDs (Light Emitting Diodes) or LDs (Laser Diodes). VLC adopts the OOK (On-Off Keying) modulation scheme: receiving an optical signal represents logic 1, and receiving none represents logic 0.
In terms of data rate, with a high-frequency LED light source, monochromatic VLC can achieve a data rate of 3 Gbps, and with trichromatic light the data rate is extended to 10 Gbps. With an LD, a single 450 nm laser beam can achieve 9 Gbps. The data rate of VLC is therefore fully capable of meeting the data transmission requirements of a data center.
In terms of transmission distance, LED-based VLC can realize 10 Gbps bandwidth within a range of 10 meters, which is sufficient for the communication tasks between adjacent racks in a data center. A project named Rojia has extended the communication distance of VLC to 1.4 km, although at a limited data rate. In addition, LD-based VLC can realize long-distance (kilometer-level) high-rate communication because laser beams have good directionality. We therefore use LED-based VLC for short-distance communication inside the data center and LD-based VLC as the long-distance communication means.
In terms of availability, VLC devices, i.e., full-duplex transceivers, have been successfully developed and are commercially available. A development platform named MOMO provides developers with APIs and SDK toolkits for developing VLC-based applications; for example, VLC can be seamlessly integrated with the Internet of Things to provide indoor positioning services. In addition, PureLiFi provides developers with LED-based facilities for rapidly configuring and testing visible light communication applications.
In summary, VLC can undertake the communication services in a data center network without bringing extra wiring cost and without requiring major changes to the hardware environment of the existing data center.
Typically, several VLC transceivers can be configured on top of each rack so that the racks are interconnected into a specific wireless topology. Given a rack R, when multiple neighbours send signals to it at the same time, the transceivers on top of R receive these signals; if the signals cannot be distinguished efficiently, interference arises and R cannot correctly decode the received signals.
We assess the interference arising when VLC is introduced into a data center using the professional optical simulation software TracePro 7.0. On one rack we place four transceivers in orthogonal directions, denoted T1, T2, T3 and T4 in turn. We let a beam of LED visible light be sent towards T1 from 3 meters away, and then characterize the amount of optical signal received by each transceiver with its illumination distribution. If T2, T3 and T4 received enough visible light, they would suffer significant interference.
Fig. 3 to Fig. 6 show the observations at T1, T2, T3 and T4 in turn. Obviously, T1 captures most of the optical signal, and the captured signal is concentrated at the centre of the transceiver; due to scattering during visible light propagation, some light deviates from the centre, so the non-central part also perceives some illumination. In contrast, the other three transceivers receive only very little optical signal, since only a few positions perceive a normalized illumination of 0.001 units; in particular, T3 receives almost no optical signal, because T3 is located directly behind T1 and light can hardly bypass T1 to reach T3. Therefore, the optical signal sent to T1 causes very limited interference to the other three transceivers, and placing four transceivers on top of a rack is quite reasonable, since the interference it brings is very small. Supported by this observation, we introduce VLC into the data center and design VLCcube, in which four orthogonal transceivers are placed on each rack.
Inside a data center, each server is connected to the top-of-rack (ToR) switch of the rack in which it resides and thereby accesses the network. In a typical wired data center, the racks are organized into a hierarchical structure by upper-layer switches and links rather than being directly interconnected. We therefore focus on directly networking the racks of a wired data center into a dedicated topology with wireless links. Here we take Fat-Tree, the most widely used structure at present, as an example and network its racks into a wireless Torus structure. In this way we construct the hybrid structure VLCcube, which seamlessly blends the wired Fat-Tree structure with the wireless Torus structure.
Fig. 7 shows the rack-top view of the wireless part of VLCcube. As shown in Fig. 7, all racks of the Fat-Tree are networked with VLC links into a 2-dimensional wireless Torus structure, in which every row contains m racks and every column contains n racks. On top of each rack, four visible light transceivers are configured facing four orthogonal directions so as to avoid mutual interference as much as possible. Note that the wired side of VLCcube keeps the Fat-Tree structure unchanged; all we do is to network all racks into a Torus with VLC links. Let k denote the number of ports of each switch, with k even. As in Fat-Tree, VLCcube has k pods, and each pod contains k/2 ToR switches and k/2 aggregation switches. Therefore, in total, the wireless Torus of VLCcube involves k²/2 ToR switches.
To guarantee the performance of VLCcube, the 2-dimensional Torus must be carefully designed. Two major problems have to be solved in order to fully exploit the advantage of the wireless links: the setting of m and n, and the placement of the racks. In a 2-dimensional Torus the switches of each dimension are connected into a ring, so the network diameter of the Torus is (m+n)/2; VLCcube therefore needs to set suitable m and n to minimize the network diameter. In addition, the number of long-range links in the 2-dimensional Torus is m+n, and the data rate of these long-range links is limited, so minimizing m+n also improves the total bandwidth of the network. For both reasons we need to minimize m+n. As for the rack placement problem, the path between any two racks in Fat-Tree is either 2 hops or 4 hops; to minimize the network diameter of VLCcube, the introduced VLC wireless links should directly connect racks that are 4 hops apart as far as possible.
Since the 2-dimensional Torus needs to accommodate k²/2 racks, the parameters m and n must satisfy:
m·(n-1) < k²/2 ≤ m·n
In VLCcube, the optimal parameter configuration is m = ⌈√(k²/2)⌉, and the value of n then depends on k²/2: if (m-1)² < k²/2 ≤ m·(m-1), then n takes the value m-1; otherwise, if m·(m-1) < k²/2 ≤ m², then n takes the value m.
The values of m and n need to minimize m+n. Since m·n ≥ k²/2, m+n reaches its minimum when and only when m = n. Considering again the inequality (m-1)² < k²/2 ≤ m·(m-1), the relation among m, n and k is obtained.
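As a concrete illustration of these rules, the short sketch below (an illustrative reconstruction, not part of the patent text; the function name and layout are assumptions) computes m and n from k.
```python
import math

def torus_dimensions(k: int):
    """Compute the 2D-Torus dimensions (m, n) for k-port switches.

    The Torus must hold k*k/2 top-of-rack switches, with
    m*(n-1) < k*k/2 <= m*n and m+n as small as possible.
    """
    racks = k * k // 2                 # number of ToR switches to place
    m = math.ceil(math.sqrt(racks))    # rows: the smallest m with m*m >= k*k/2
    # columns: n = m-1 if the racks still fit, otherwise n = m
    n = m - 1 if m * (m - 1) >= racks else m
    assert m * (n - 1) < racks <= m * n
    return m, n

# e.g. k = 6 yields (m, n) = (5, 4), matching the example of Fig. 8 and Fig. 9
```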
Given m and n, the next issue is the rack placement problem. In Fat-Tree, if a pair of racks belong to the same pod, the path between them is 2 hops; otherwise it needs 4 hops. VLCcube shortens as many of the 4-hop wired paths between racks as possible into 1-hop wireless paths; in other words, the VLC links of VLCcube should be used to interconnect racks that do not belong to the same pod.
To clearly describe the rack placement strategy, we first introduce the concept of a rack identifier. Fig. 8 shows the rack identifiers of a data center; in VLCcube every rack has a unique identifier. The identifier consists of two parts, a prefix and a suffix. The prefix ranges from 0 to k and indicates which pod the rack belongs to; the suffix takes a value between 0 and k/2 and is the number of the rack inside its pod. For example, identifier 51 denotes the 2nd rack in the 6th pod.
We also introduce the pod-layer logic graph, as shown in Fig. 9. The pod-layer logic graph regards each pod as a node; if there are one or more wireless links between two pods, an edge is added between the corresponding nodes. In the VLCcube shown in Fig. 8 and Fig. 9, k=6, m=5 and n=4, and the corresponding pod-layer logic graph can be derived from the definition above. Herein the connectivity of the pod-layer logic graph is measured by its number of edges. The pod-layer logic graph of the example has 6 nodes and 15 edges, forming a complete graph. Therefore, given the value of k, the total number of edges of the pod-layer logic graph is at most k·(k-1)/2.
With the above definitions and given values of k, m and n, we design three steps to construct the 2-dimensional wireless Torus; an illustrative code sketch follows the three steps below. As shown in Fig. 8 and Fig. 9, the obtained Torus may not be a complete Torus in the strict sense.
Step 1: assign the identifier prefixes. For any x ∈ [0, k], we randomly select k/2 racks and set their prefix to x. Each prefix is allocated k/2 times because every pod contains k/2 racks. The only constraint this step needs to satisfy is that no rack may have the same prefix as any of its four neighbours; in the event of a conflict, this step is repeated until all prefixes in the graph are assigned.
Step 2: compute the identifier suffixes. In the rack-layer logic graph, every rack receives a suffix that distinguishes it from the other racks in the same pod; the suffix ranges from 0 to k/2.
Step 3: improve the connectivity of the pod-layer logic graph. We repeat the above two steps several times, compute the connectivity of the pod-layer logic graph obtained by each execution, and choose the assignment with the maximal connectivity as the final result.
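A minimal sketch of this three-step labelling is given below. It is an illustrative reconstruction under simplifying assumptions (the racks are laid out row by row on the m×n wrap-around grid, and conflicting prefix assignments are simply retried), not the patent's reference implementation.
```python
import itertools
import random

def build_vlccube_labels(k, m, n, trials=10):
    """Assign (prefix, suffix) identifiers to the k*k/2 racks and keep the
    labelling whose pod-layer logic graph has the most edges."""
    racks = [(r, c) for r in range(m) for c in range(n)][: k * k // 2]

    def neighbours(rack):
        r, c = rack
        # 2D-Torus neighbours: wrap around in both dimensions
        return [((r - 1) % m, c), ((r + 1) % m, c), (r, (c - 1) % n), (r, (c + 1) % n)]

    def one_trial():
        # Step 1: random prefixes, k/2 racks per pod, neighbours must differ
        # (a plain retry loop; a real implementation would repair conflicts)
        while True:
            pods = list(itertools.chain.from_iterable([p] * (k // 2) for p in range(k)))
            random.shuffle(pods)
            prefix, ok = {}, True
            for rack, pod in zip(racks, pods):
                if any(prefix.get(nb) == pod for nb in neighbours(rack)):
                    ok = False
                    break
                prefix[rack] = pod
            if ok:
                break
        # Step 2: suffixes simply number the racks inside each pod
        suffix, counter = {}, [0] * k
        for rack in racks:
            suffix[rack] = counter[prefix[rack]]
            counter[prefix[rack]] += 1
        # connectivity: pod pairs joined by at least one VLC link
        edges = {frozenset((prefix[a], prefix[b]))
                 for a in racks for b in neighbours(a)
                 if b in prefix and prefix[a] != prefix[b]}
        return len(edges), prefix, suffix

    # Step 3: repeat and keep the labelling with the best pod-layer connectivity
    return max((one_trial() for _ in range(trials)), key=lambda t: t[0])
```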
We further show that the above steps derive a correct and legal VLCcube.
When k ≥ 4, the above steps can obtain a feasible VLCcube construction scheme, and each pod appears k/2 times in the rack-layer logic graph.
In the first step we ensure that each pod is allocated k/2 times in the rack-layer logic graph and that each VLC link interconnects two different pods. If each rack in the rack-layer logic graph is regarded as being given a colour, this is equivalent to proving that the rack-layer logic graph can be coloured with k colours. In fact, the rack-layer logic graph of VLCcube is a 4-regular graph, i.e., its chromatic number is 4, so the graph can be coloured with 4 colours; therefore, when k ≥ 4, a feasible configuration scheme of VLCcube must exist.
Meanwhile, the pod-layer logic graph of VLCcube must be connected; otherwise, if some pod could not be reached over the VLC links, the performance of VLCcube could not be guaranteed.
Indeed, the pod-layer logic graph of the VLCcube obtained by the above three steps is connected.
Note that the rack-layer view is a 2-dimensional Torus structure; whether it is a complete Torus or an incomplete one, it is a connected graph. That is, given any source rack x_i y_i, a path to any destination rack x_j y_j can be found, and when this path is mapped onto the pod-layer logic graph, a path from pod x_i to pod x_j is found. Therefore, the pod-layer logic graph of VLCcube is connected.
The above argument guarantees the soundness of the VLCcube construction method. The third step further improves the connectivity of the pod-layer logic graph by repeating the construction and choosing the optimum; the rationale is that, after multiple executions, a better solution is more likely to be obtained. Its concrete effect is verified in the subsequent experiments.
From the perspective of topology design, VLCcube integrates the topological properties of Fat-Tree and Torus, including scalability, constant degree, multipath and fault tolerance. In addition, VLCcube is plug-and-play in deployment: once the visible light communication devices are placed, no further adjustment or control is required during use. It should also be noted that VLCcube realizes rack-level wireless networking without making any change to the existing Fat-Tree structure or to the machine-room surroundings.
For any pair of racks, there exist wired paths, wireless paths and wired-wireless hybrid paths between them. In this embodiment we focus on designing the hybrid routing algorithm of VLCcube. To minimize network congestion, we model the network congestion coefficient of VLCcube and propose congestion-aware flow scheduling algorithms for batched traffic and for sequential traffic respectively.
Given any pair of racks, a hybrid path Path_h contains both wired links and wireless links. That is to say, when designing the hybrid routing algorithm, the topological properties of both Fat-Tree and Torus must be considered. According to the characteristics of VLCcube itself, we design the following hybrid routing algorithm. Suppose the source rack and the destination rack are x_i y_i and x_j y_j respectively. We first obtain, on the pod-layer logic graph, the path from pod x_i to pod x_j, and then instantiate this pod-level path at the rack level; during the instantiation, suitable VLC links need to be selected. Finally, the wired links involved are added into the path.
First, the path Path_hp from the source pod to the destination pod is computed on the pod-layer logic graph. This step is fairly simple, because the pod-layer logic graph has only k nodes.
Then, each wireless link of Path_hp is selected, i.e., the rack-level path Path_ht is computed. Since there may be several optional wireless links between a pair of pods, and selecting different wireless links leads to different path lengths, each hop of Path_hp should select the wireless link that yields the shortest path. In the VLCcube shown in Fig. 9, let the source rack be 11 and the destination rack be 41. In the pod-layer logic graph, pod 1 and pod 4 are neighbours, and in the rack-layer logic graph there are three optional links that directly interconnect pod 1 and pod 4. If the first of them is selected, rack 11 needs to forward to rack 10 and rack 40 needs to forward the data to rack 41, so both pod 1 and pod 4 need an aggregation-layer switch as a relay; if either of the other two links is selected, only one aggregation-layer switch is needed as a relay, i.e., only one relay hop (11 to 12 or 11 to 10) is needed. Thus the first link leads to a 5-hop path, while the other two need only 4 hops.
Finally, the required wired links are added into the path Path_ht. This step adds the necessary aggregation-layer switches into the path. In each pod, the aggregation-layer switches and the ToR switches constitute a complete bipartite graph, so any aggregation-layer switch in the pod can serve as a relay between any two ToR switches; in this step we therefore choose the required aggregation-layer switches at random.
Using the above three steps, the shortest hybrid path between any two racks can be computed. The time complexity of the first step is O(k²), and the time complexities of the second and the third step are O(1); therefore the time complexity of the routing algorithm is O(k²). Note that k denotes the number of switch ports (usually below 100), so an O(k²) complexity is acceptable.
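The following sketch illustrates the three-step hybrid routing described above. It is a simplified reconstruction under assumed input structures (a pod-layer adjacency map and a table of the VLC links realizing each pod-level edge); the relay selection heuristic in step 2 is a plain approximation of choosing the shortest realization, not the patent's exact rule.
```python
from collections import deque

def hybrid_route(src, dst, pod_graph, wireless_links, pod_of):
    """Compute a rack-level hybrid path from ToR switch src to ToR switch dst.

    pod_graph      : dict pod -> iterable of neighbouring pods (pod-layer logic graph)
    wireless_links : dict (pod_a, pod_b) -> list of (rack_in_a, rack_in_b) VLC links,
                     assumed to be populated for both orderings of each pod pair
    pod_of         : dict rack -> pod
    Intra-pod relays through aggregation switches appear as 'agg@<pod>'.
    """
    # Step 1: shortest pod-level path Path_hp by BFS (the graph has only k nodes)
    start, goal = pod_of[src], pod_of[dst]
    parent, queue = {start: None}, deque([start])
    while queue and goal not in parent:
        p = queue.popleft()
        for q in pod_graph[p]:
            if q not in parent:
                parent[q] = p
                queue.append(q)
    pod_path, p = [], goal
    while p is not None:
        pod_path.append(p)
        p = parent[p]
    pod_path.reverse()

    # Step 2: realize every pod-level hop with a VLC link that keeps the path short
    path = [src]
    for a, b in zip(pod_path, pod_path[1:]):
        ra, rb = min(wireless_links[(a, b)],
                     key=lambda link: (link[0] != path[-1]) + (link[1] != dst))
        if ra != path[-1]:
            path += ["agg@%s" % (a,), ra]    # Step 3: wired relay inside pod a
        path.append(rb)                      # cross the VLC link into pod b
    if path[-1] != dst:
        path += ["agg@%s" % (goal,), dst]    # Step 3: wired relay inside the destination pod
    return path
```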
We introduce wireless links into the wired data center to improve its network performance, the main means being to network all racks into a Torus structure with wireless links. To give full play to the VLC links and minimize network delay, we propose scheduling models that optimize the network congestion coefficient under batched traffic and under sequential traffic. In the Torus, each rack is equipped with 4 transceivers, so that any rack can communicate with its 4 neighbours simultaneously. We first introduce the necessary concepts and definitions.
We represent a data center network as G(V, E), where V and E denote the node set and the edge set respectively. In addition, F = (f1, f2, …, fδ) denotes the δ flows injected into the network. For each flow fi = (si, di, bi), si, di and bi denote the source switch, the destination switch and the required bandwidth of the flow respectively. Φ denotes a scheduling scheme that can successfully transmit F.
Definition 1: Given F and Φ, the congestion coefficient of any link e in the network is defined as φ_e = t(e)/c(e),
where t(e) and c(e) denote the traffic passing through link e and the capacity (bandwidth) of link e respectively. Every φ_e lies in the interval [0, 1]; in particular, when no flow passes through the link its congestion coefficient is 0, and when the link is fully used its congestion coefficient is 1.
Definition 2: The congestion coefficient of a path P is defined from the congestion coefficients of the links on P.
With this definition, we can locate the congested nodes on a path and judge whether a given path can undertake subsequent flow transmission tasks.
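For concreteness, a small sketch of Definition 1 follows; the per-path judgement at the end simply takes the bottleneck (maximum) link coefficient, which is one natural reading of Definition 2 rather than the patent's exact formula, and all names are illustrative.
```python
def link_congestion(schedule, capacity):
    """phi_e = t(e) / c(e): traffic carried by link e divided by its capacity.

    schedule : dict flow_id -> (bandwidth, path), path being a list of links
    capacity : dict link -> c(e)
    """
    traffic = {e: 0.0 for e in capacity}
    for bandwidth, path in schedule.values():
        for e in path:
            traffic[e] += bandwidth
    return {e: traffic[e] / capacity[e] for e in capacity}

def path_bottleneck(path, phi):
    # assumed path-level coefficient: the most congested link on the path
    return max(phi[e] for e in path)
```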
Definition 3: SBF (scheduling batched flows) problem. Given the data center network G(V, E) and the traffic F that needs to be transmitted, the goal of batched flow scheduling is to find a reasonable path allocation scheme Φ* such that the maximal link congestion coefficient is minimized.
The SBF problem is modeled herein as follows:
minimize Z
subject to: each flow is transmitted on exactly one path (expressed by the first three constraints over in(v) and out(v)); b_min/c_max ≤ Z ≤ b_max/c_min; and the congestion coefficient of every link does not exceed Z.
In the above model, i is an integer between 0 and δ, in(v) and out(v) denote the sets of traffic flowing into and out of node v of VLCcube respectively, and t_f denotes the size of flow f. The first three formulas of the model guarantee that each flow is transmitted on only one path; the fourth formula determines the bounds of the objective function Z, where b_max and b_min denote the maximal and minimal flow sizes and, correspondingly, c_max and c_min denote the maximal and minimal link capacities of VLCcube; the fifth formula requires that the congestion coefficient of every link does not exceed Z, where an indicator variable characterizes whether link e is occupied by flow f_i, taking the value 1 if it is and 0 otherwise.
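As an illustration only, the sketch below states a path-based variant of this program with the open-source PuLP solver; it mirrors the constraints described above but is not the patent's exact formulation (which is written over in(v)/out(v) flow conservation), and the input structures are assumptions.
```python
import pulp

def solve_sbf(flow_sizes, candidate_paths, capacity):
    """Path-based sketch of the SBF integer program.

    flow_sizes      : dict flow_id -> required bandwidth b_i
    candidate_paths : dict flow_id -> list of candidate paths (lists of links)
    capacity        : dict link -> capacity c(e)
    """
    prob = pulp.LpProblem("SBF", pulp.LpMinimize)
    z = pulp.LpVariable("Z", lowBound=0)
    # x[i, j] = 1 iff flow i is routed on its j-th candidate path
    x = {(i, j): pulp.LpVariable("x_%s_%d" % (i, j), cat="Binary")
         for i, paths in candidate_paths.items() for j in range(len(paths))}

    prob += z  # minimize the worst link congestion coefficient

    for i, paths in candidate_paths.items():
        # each flow is transmitted on exactly one path
        prob += pulp.lpSum(x[i, j] for j in range(len(paths))) == 1

    links = {e for paths in candidate_paths.values() for p in paths for e in p}
    for e in links:
        # the congestion coefficient of every link is bounded by Z
        prob += pulp.lpSum(flow_sizes[i] / capacity[e] * x[i, j]
                           for i, paths in candidate_paths.items()
                           for j, p in enumerate(paths) if e in p) <= z

    prob.solve()
    return {i: candidate_paths[i][j]
            for (i, j), var in x.items() if var.value() and var.value() > 0.5}
```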
The SBF problem is a typical integer linear programming problem and is NP-hard, so it cannot be solved exactly in polynomial time. For this reason, a lightweight algorithm is designed herein to obtain a feasible solution. For any f_i ∈ F, we search out the three kinds of paths present in VLCcube as its candidate set; in fact, this set contains k²/4 wired paths, one wireless path and one hybrid path. To solve the flow scheduling scheme of F, a heuristic algorithm based on the congestion coefficient is designed herein.
Definition 4: Given the flow set F, with the candidate path set computed for every flow f_i, the congestion coefficient of a link e ∈ E, denoted l_e, is the total number of candidate paths of all flows in F that traverse this link.
Definition 5: For any path P, its congestion coefficient, denoted l_P, is the sum of the congestion coefficients of all links on the path, i.e., l_P = Σ_{e∈P} l_e.
Essentially, the congestion coefficients of a link e and of a path P characterize the probability that they are occupied by multiple flows. We therefore use l_P in the heuristic algorithm as the main criterion for judging whether f_i should occupy P. Specifically, for f_i, the heuristic algorithm chooses, from all of its candidate paths, the path with the minimal congestion coefficient as its transmission path.
Based on the definition of the congestion coefficient, Algorithm 1 gives the basic idea of the greedy algorithm we design. For each flow we first search out its candidate paths, and then compute the congestion coefficient of each link in VLCcube. Afterwards, we return to each flow: for any flow f_i, the congestion coefficients of all of its candidate paths are computed, and the candidate path with the minimal congestion coefficient is selected as the transmission path of f_i. The time complexity of this algorithm is O(δ·(k²+k+4)) in the candidate-path searching stage and O(δ·(k²/4+2)) in the path selection stage; so, in general, the time complexity of the algorithm is O(δ·k²).
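A compact sketch of this greedy selection (Algorithm 1 as described above; the data structures are assumed for illustration) follows.
```python
def schedule_batched_flows(flows, candidate_paths):
    """Greedy congestion-coefficient heuristic for batched flows.

    flows           : iterable of flow identifiers
    candidate_paths : dict flow -> list of candidate paths, each a list of links
    Returns a dict mapping every flow to its chosen transmission path.
    """
    # l_e: number of candidate paths (over all flows) that traverse link e
    link_coeff = {}
    for paths in candidate_paths.values():
        for path in paths:
            for e in path:
                link_coeff[e] = link_coeff.get(e, 0) + 1

    schedule = {}
    for f in flows:
        # l_P = sum of l_e over the links of P; pick the least-congested candidate
        schedule[f] = min(candidate_paths[f],
                          key=lambda p: sum(link_coeff[e] for e in p))
    return schedule
```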
The congestion coefficient of a link e means that at most l_e flows may use this link; similarly, the congestion coefficient of a path P means that at most l_P flows may pass through this path. If no scheduling strategy is adopted, every candidate path has an equal probability of being chosen by f_i as its transmission path. Based on this observation, Theorems 4, 5 and 6 demonstrate the correctness of using the path congestion coefficient as the selection criterion of the greedy algorithm.
Theorem 4: In VLCcube, given a flow f_i ∈ F and any link e in the network, the probability that e is used by f_i can be computed, where F_e denotes the set of flows that may use link e, and the congestion contribution of f_i on link e accounts for the fact that more than one candidate path of f_i may traverse the link.
Proof: For any flow f_i ∈ F_e, if some of its candidate paths traverse e, the probability that f_i uses e is the fraction of its candidate paths that traverse e; otherwise, f_i cannot possibly use link e.
Theorem 5: In VLCcube, for the flows f_i ∈ F_e, let η denote the number of flows passing through link e under the scheduling scheme Φ; then the probabilities that η = 0 and that η = 1 can be computed.
Proof: Given the flow set F, whether the individual flows use link e are mutually independent events; therefore the probabilities that η = 0 and that η = 1 can be computed, and the corresponding probability that η ≥ 2 can also be computed.
Theorem 6: Consider a flow f_i in F, let η denote the number of flows passing through a path P, and let E(P) be the set of links composing path P; then for any P the probabilities that η = 0 and that η = 1 can be computed.
Proof: For a candidate path P of f_i, η = 0 means that no flow uses the path, and η = 1 means that the path is used by f_i alone or that at least one link of P is used by a flow other than f_i. The corresponding probabilities can therefore be computed.
Based on the conclusion of Theorem 4, Theorems 5 and 6 compute the probabilities that no flow or only one flow uses link e and path P. Note that when η ≥ 2, link e and path P are likely to become congested, because a long transmission time of an earlier flow may cause later flows to time out and lose packets. Theorems 5 and 6 show that the larger the value of l_e, the more likely it is that two or more flows use link e or path P (and thus the more likely congestion occurs). Therefore, the probability that path P becomes congested is proportional to the value of l_P, which proves the correctness of using the path congestion coefficient as the path selection criterion in Algorithm 1. Since the proposed greedy algorithm selects the path with the minimal congestion coefficient to transmit each flow, the network congestion of VLCcube is greatly reduced.
However, the traffic inside a data center does not necessarily all arrive in a batch; in fact, traffic typically arrives in sequence. Let Φ0 denote the currently existing flow scheduling strategy, F_N denote the newly arrived flows, and F_O denote the old flows that need to be retransmitted for network reasons. The flow set that needs to be scheduled is therefore F1 = F_N + F_O. Taking F1 as input, we define the sequential flow scheduling problem as follows:
Definition 6: SOF (scheduling online flows) problem, i.e., the dynamic flow scheduling problem. The goal is to obtain a new scheduling scheme Φ1 such that the increment of the congestion ratio brought by transmitting the new traffic is minimal. Let ΔZ = Z1 - Z0, where Z0 and Z1 are the values of the objective under Φ0 and Φ1 respectively; the goal is to minimize ΔZ.
The SOF problem is triggered when new flows arrive or when some existing flows need to be retransmitted. Like SBF, the SOF problem is an integer linear programming problem; for reasons of length, the details of its model are omitted here.
The SOF problem needs to minimize Z1, so the batched scheduling strategy above could equally be used to solve it; however, since the SOF problem may be triggered frequently, that algorithm would be run many times and would incur a huge computational cost. For this reason, we only consider the flows in F1 and propose a greedy algorithm for the SOF problem. For any flow in F1, the basic idea is to let it use, as its transmission path, the path whose links increase the congestion ratio of the existing network the least.
As shown in Algorithm 2, our greedy strategy first computes the set of flows that need to be scheduled, that is, it distinguishes the flows that have completed, the newly arrived flows and the flows whose transmission failed. The algorithm must know which devices and links are currently available, so the state of the whole network is updated. Afterwards, for each flow in F1, the algorithm searches out its candidate paths of the three kinds. After the candidate paths of f_i have been computed, the algorithm computes the congestion ratio of each candidate path from the values of b_i and c_i, and then selects the candidate path with the minimal congestion ratio as the transmission path of f_i. Once all flows in F1 have been assigned paths, the algorithm returns the solution S_online of the SOF problem. Since F1 contains δ1 flows, the above process is executed δ1 times, and each execution spends O(k²+k) time searching the candidate paths of a flow; the time complexity of the algorithm is therefore O(δ1·k²).
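The sketch below illustrates this online greedy step under assumed data structures; the congestion-ratio increment is computed here as the increase of the bottleneck link coefficient along a candidate path, which is one plausible reading of the description rather than the patent's exact formula.
```python
def schedule_online_flows(new_flows, failed_flows, finished_flows,
                          candidate_paths, link_load, capacity):
    """Greedy scheduling of sequentially arriving flows (in the spirit of Algorithm 2).

    new_flows, failed_flows : iterables of (flow_id, bandwidth) pairs to schedule
    finished_flows          : iterable of (flow_id, bandwidth, path) whose capacity is freed
    candidate_paths         : dict flow_id -> list of candidate paths (lists of links)
    link_load               : dict link -> bandwidth currently scheduled on it (updated in place)
    capacity                : dict link -> link capacity
    """
    # update the network state: completed flows release their links
    for _fid, bw, path in finished_flows:
        for e in path:
            link_load[e] -= bw

    schedule = {}
    for fid, bw in list(new_flows) + list(failed_flows):
        def increment(path, bw=bw):
            # growth of the bottleneck congestion coefficient if this path carried the flow
            before = max(link_load[e] / capacity[e] for e in path)
            after = max((link_load[e] + bw) / capacity[e] for e in path)
            return after - before
        best = min(candidate_paths[fid], key=increment)
        for e in best:
            link_load[e] += bw
        schedule[fid] = best
    return schedule
```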
Theorem 7: For sequential traffic, Algorithm 2 is better than the ECMP scheduling method.
Proof: For any flow f_i in F1, the expected congestion of f_i when the ECMP scheduling method is used can be compared with its expected congestion when Algorithm 2 is used. Since ECMP spreads flows over the equal-cost paths without considering their congestion, whereas Algorithm 2 always chooses the candidate path with the minimal congestion increment, the congestion caused by Algorithm 2 cannot exceed the congestion caused by the ECMP scheduling method. The theorem is proved.
The effect of the scheduling method of the present invention is evaluated below.
We implement VLCcube and Fat-Tree in the specialized network simulation software NS3 (Network Simulator 3). Given the value of k, the Fat-Tree structure is obtained directly, and VLCcube is obtained according to the construction method given above. The bandwidth of the wired connections and the short-range wireless links of VLCcube is set to 10 Gbps, while the bandwidth of the long-range wireless links is limited to 100 Mbps. The retransmission timeout (RTO) of the network is fixed to 2 seconds. Based on these parameter settings, we first compare the quality of the two topologies, then compare the time complexity of the wired-path, wireless-path and hybrid-path routing algorithms, and finally focus on measuring the network performance of the two.
Our experiments consider three different traffic patterns: 1) Trace traffic: traffic recorded in Yahoo data centers; 2) Stride-i traffic: the server numbered x in the network sends packets to the server numbered (x+i) mod N, where N is the total number of servers in the network; 3) Random traffic: the source and destination of every flow are chosen at random. Network throughput and packet loss rate are used to measure the performance of the network under the different traffic patterns.
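For illustration, two tiny generators for the synthetic patterns described above are sketched here (the names and flow representation are assumptions; the Trace pattern comes from recorded data and is not generated).
```python
import random

def stride_flows(num_servers: int, i: int):
    """Stride-i pattern: server x sends to server (x + i) mod N."""
    return [(x, (x + i) % num_servers) for x in range(num_servers)]

def random_flows(num_servers: int, count: int):
    """Random pattern: the source and destination of every flow are chosen at random."""
    flows = []
    for _ in range(count):
        src = random.randrange(num_servers)
        dst = random.randrange(num_servers)
        while dst == src:
            dst = random.randrange(num_servers)
        flows.append((src, dst))
    return flows
```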
In order to verify it is proposed that dispatching algorithm performance, we compare VLCcube first and Fat-Tree is used Network performance during ECMP dispatching method, then compares the stream dispatching algorithm of ECMP and congestion aware again to VLCcube internetworking The impact that can bring.It should be noted that the time of advent of sequence flow obeys Poisson distribution.
In order to compare the quality of VLCcube and Fat-Tree in topological aspect, we measure the average road of two kinds of networks Electrical path length and network total bandwidth.As shown in Figure 10 is with Figure 11, for comparing Fat-Tree, VLCcube can provide more networks band Width, and have shorter average path length.The reason for causing these advantages is that to introduce extra VLC in VLCcube wireless Link.Meanwhile, it is observed that impact of the VLC wireless link to network average path for introducing present marginal successively decrease become Gesture.That is, when network size is compared with hour, VLC wireless link can significantly more reduce average path length.In fact, The value of given k, has k in VLCcube2Bar VLC wireless link, and in network, wired and wireless link sum is k3/2+k2.With The increase of k, VLC wireless link accounts for the ratio of total link number and is gradually reduced, so as to cause the appearance of above-mentioned border effect.
We execute the VLCcube construction method multiple times and select the best VLCcube construction plan among the results. For ease of comparison, the connectivity of the pod-layer logic graph of each generated VLCcube (the number of edges in the pod-layer logic graph) is normalized by that of the corresponding complete graph. In Figure 12, VLCcube1, VLCcube2 and VLCcube10 denote the connectivity of the pod-layer logic graph of the VLCcube structures obtained when the construction method is executed 1, 2 and 10 times, respectively. Clearly, the connectivity of the pod-layer logic graph decreases as k increases, and the more times the construction method is executed, the better the resulting structure, because a better frame placement scheme is more likely to be found.
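A hedged reading of this normalization, assuming the pod-layer logic graph has one vertex per pod (k vertices) and writing E for its edge set, is

```latex
\text{connectivity}_{\mathrm{norm}} \;=\; \frac{|E|}{\binom{k}{2}} \;=\; \frac{2|E|}{k(k-1)}
```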
We further compare the time complexity of the routing algorithms that search for wired, wireless and mixed paths. Figure 13 records the time overhead of the three algorithms. It can be seen that, as the network size grows, the time consumption of the mixed-path routing algorithm keeps increasing and exceeds that of the other two. The time overhead of the wireless-path routing algorithm also shows a continuously increasing trend, growing from 0.2 milliseconds to 0.575 milliseconds. Note that the time overhead of the wired-path routing algorithm is the smallest and remains stable at a very low level of about 0.09 milliseconds. In general, therefore, the time complexity of the wired-path routing algorithm is constant, while the complexities of the wireless-path and mixed-path routing algorithms are proportional to k and k², respectively.
Therefore, from the above results we can conclude that VLCcube provides more network bandwidth and a shorter average path length, that is, it has better topological performance.
We also compare the network throughput and packet loss rate of VLCcube and Fat-Tree when ECMP is adopted. Under the different traffic patterns, we adjust the network size by increasing the value of k from 6 to 60, and observe and record the network throughput and packet loss rate. In addition, our experiments vary the mean flow size from 5 Mb to 300 Mb to reveal the impact of flow size on network performance; the flow sizes of the real Trace, however, cannot be changed, since they are determined by the Trace data. In each experiment, the network throughput is normalized by the throughput of VLCcube at k = 60, or by the throughput when the mean flow size is 300 Mb.
For Trace traffic, the Yahoo! trace used here records the basic information of every flow within its six distributed data centers over a period of time, including the IP addresses of the source and destination servers, the flow size, and the port numbers used. By identifying the port numbers used by a flow, it can be determined whether the flow is intra-data-center traffic or traffic across data centers. We then inject k³ randomly selected flows into VLCcube and Fat-Tree, respectively, to evaluate their performance.
Figure 14 and Figure 15 record the throughput and packet loss rate of VLCcube and Fat-Tree under Trace traffic as the value of k increases from 6 to 60. The results show that, compared with Fat-Tree, VLCcube provides more than 8.5% higher throughput and reduces the packet loss rate by 39%. The underlying reason is that VLCcube introduces wireless links, so every flow has more candidate paths to choose from.
For Stride-2k traffic, with the mean flow size fixed at 150 Mb, we increase the value of k from 6 to 60 and record the network throughput and packet loss rate when k³ flows are injected; Figure 16 and Figure 17 show the experimental results. As k increases, both VLCcube and Fat-Tree can transmit a larger number of flows, so their throughput keeps growing. On average, however, VLCcube provides 15.14% more throughput than Fat-Tree, and its packet loss rate is also lower.
Meanwhile, to measure the impact of flow size on performance, we fix k = 36, increase the mean flow size from 50 Mb to 300 Mb, and inject k³ flows into the network. As shown in Figure 18 and Figure 19, VLCcube still outperforms Fat-Tree. Specifically, even when the mean flow size is 150 Mb, VLCcube achieves 14.31% more throughput, and its packet loss rate is also much lower.
For the random traffic pattern, the source and destination servers of each flow are chosen at random, and likewise k³ flows are injected into the network.
First, we fix the mean flow size at 150 Mb and increase the value of k, which determines the network size, from 6 to 60. As shown in Figure 20, the throughput of both VLCcube and Fat-Tree rises sharply, and on average VLCcube remains superior, with 10.44% more throughput than Fat-Tree. Figure 21 shows that once k ≥ 18, Fat-Tree always suffers from a very high packet loss rate, while the packet loss rate of VLCcube stays at a low level. Specifically, the average packet loss rates of VLCcube and Fat-Tree are 0.27% and 2.45%, respectively.
We further measure the impact of flow size with k fixed at 36. Figure 22 and Figure 23 show that, as the size of the injected flows increases, the network throughput keeps rising, and the packet loss rate rises at the same time. Nevertheless, compared with Fat-Tree, VLCcube maintains a considerable advantage, keeping a lower packet loss rate while delivering higher throughput.
Therefore, the experiments confirm that, when both use the ECMP scheduling algorithm, VLCcube outperforms Fat-Tree in network performance under all of the different traffic patterns.
Although the above experiments fully demonstrate that VLCcube is better than Fat-Tree, the topological advantage of VLCcube is in fact not fully exploited when ECMP is used. Therefore, we additionally evaluate the performance of VLCcube when the congestion-aware flow scheduling algorithms are used.
First, as k increases from 6 to 42, we inject k³ batch random flows into VLCcube. The throughput obtained in the experiments is normalized by the throughput obtained when the ECMP scheduling method is used. As shown in Figure 24 and Figure 25, ECMP not only yields lower throughput but also subjects the network to a higher packet loss rate. In contrast, our proposed SBF algorithm for batch flows achieves 1.54 times the throughput, and the network packet loss rate is also very low (in particular, the reduction is most evident as k increases from 6 to 18). The fundamental reason is that SBF provides more candidate paths for the traffic, so that the traffic is spread across VLCcube as much as possible.
In addition, VLCcube schedules sequential flows with our proposed SOF algorithm. In the experiments, we increase the value of k from 6 to 24 and likewise inject k³ sequential random flows. The arrival times of the flows are set to obey a Poisson distribution with parameter λ. Figure 26 and Figure 27 record the experimental results, in which ECMP-x and SOF-x denote the results of the ECMP and SOF scheduling methods, respectively, when λ = x. The results show that when λ = 2 and λ = 4, SOF achieves 2.22 times and 5.56 times the throughput of ECMP, respectively, while its packet loss rates remain very low. We note that both ECMP and SOF perform better when λ = 4 than when λ = 2. The reason is that, as λ increases, the flow arrivals are spread out in time and fewer flows arrive simultaneously, so congestion is less likely to occur and the packet loss rate decreases accordingly.
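An illustrative generator for such an arrival process is sketched below. The exact parametrization of λ is not spelled out here, so λ is read as the mean inter-arrival gap, which matches the observation that larger λ spreads arrivals out in time; the function name is ours.

```python
import random

def sequential_arrivals(num_flows, lam):
    """Arrival times with exponential inter-arrival gaps of mean `lam`,
    i.e. a Poisson arrival process; larger `lam` spreads flows out in time."""
    t, times = 0.0, []
    for _ in range(num_flows):
        t += random.expovariate(1.0 / lam)   # expovariate takes the rate 1/lam
        times.append(t)
    return times
```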
Therefore, SBF and SOF can effectively improve the network performance of VLCcube and, compared with ECMP, reduce the packet loss rate.
In summary, by means of the above technical scheme, this paper presents VLCcube, a novel data center structure that is easy to deploy and delivers high performance. To enhance the existing wired data center network Fat-Tree, the emerging visible light communication technology is used to interconnect the racks into a wireless Torus structure; the introduced visible light links effectively reduce the average path length of the network and increase the network bandwidth. Meanwhile, congestion-aware flow scheduling methods are designed separately for the batch and sequential traffic patterns, so as to make full use of the advantages of VLCcube. By coupling a newly built wireless 2D-Torus network with the existing wired Fat-Tree rack structure so that the mixed structure works as a whole, the wired data center is significantly extended at low cost and the network flexibility is improved, without modifying any existing devices or wiring layout of the wired data center: only wireless connections across racks need to be established. Meanwhile, by splitting the arriving data flows into batch flows and sequential flows and scheduling the batch flows first and the sequential flows afterwards, the network performance of the mixed structure is further improved. The experimental evaluation shows that VLCcube significantly outperforms Fat-Tree, and that the proposed flow scheduling algorithms can fully improve the performance of VLCcube.
Those of ordinary skill in the art should understand that the foregoing are only specific embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

1. A mixed data center network flow scheduling method based on congestion coefficients, characterized by comprising:
determining the links of the wired Fat-Tree structure and of the wireless 2D-Torus structure among a plurality of frames in a mixed data center network system, and the route between any pair of frames;
obtaining a plurality of to-be-injected data flows that arrive in batch, and designating for each batch flow a transmission path with the minimum sum of congestion coefficients;
continuously obtaining a plurality of to-be-injected data flows that arrive in sequence, and designating for each sequential flow a transmission path with the minimum sum of congestion coefficient increments.
2. The mixed data center network flow scheduling method based on congestion coefficients according to claim 1, characterized in that designating for each batch flow a transmission path with the minimum sum of congestion coefficients comprises:
for each batch flow in turn, searching for all of its candidate paths composed of links in the wired Fat-Tree structure and the wireless 2D-Torus structure;
calculating the congestion coefficient of each link, and calculating the congestion coefficient of each candidate path according to the congestion coefficients of the links;
choosing the candidate path with the minimum congestion coefficient as the transmission path of the given batch flow.
3. The mixed data center network flow scheduling method based on congestion coefficients according to claim 2, characterized in that each batch flow is transmitted on only one candidate path, and the maximum of the congestion coefficients of the links does not exceed the congestion coefficient of the candidate path, wherein the congestion coefficient of a link represents the maximum number of flows using this link, and the congestion coefficient of a candidate path represents the maximum number of flows passing through this path.
4. The mixed data center network flow scheduling method based on congestion coefficients according to claim 3, characterized in that the candidate paths composed of links in the wired Fat-Tree structure and the wireless 2D-Torus structure include k²/4 wired paths, one wireless path and one mixed path, where k is the number of pods at the pod layer of the wireless 2D-Torus structure.
5. The mixed data center network flow scheduling method based on congestion coefficients according to claim 1, characterized in that designating for each sequential flow a transmission path with the minimum sum of congestion coefficient increments comprises:
obtaining newly arrived flows and old flows that need to be retransmitted, the newly arrived flows and the old flows that need to be retransmitted together constituting the sequential flows to be scheduled;
updating the congestion coefficient of each link in the wired Fat-Tree structure and the wireless 2D-Torus structure;
for each sequential flow in turn, searching for all of its candidate paths composed of links in the wired Fat-Tree structure and the wireless 2D-Torus structure;
calculating the congestion ratio of each link, and calculating the congestion ratio of each candidate path according to the congestion ratios of the links, wherein the congestion ratio is the increment of the congestion coefficient;
choosing the candidate path with the minimum congestion ratio as the transmission path of the given sequential flow.
6. The mixed data center network flow scheduling method based on congestion coefficients according to claim 5, characterized in that each sequential flow is transmitted on only one candidate path, and the maximum of the congestion coefficients of the links does not exceed the congestion coefficient of the candidate path, wherein the congestion coefficient of a link represents the maximum number of flows using this link, and the congestion coefficient of a candidate path represents the maximum number of flows passing through this path.
7. The mixed data center network flow scheduling method based on congestion coefficients according to claim 1, characterized in that determining the route between any pair of frames in the mixed data center network system comprises:
setting k pods among the plurality of frames, and placing each frame under a pod so as to form a pod layer with k pods;
assigning an identifier to each frame according to its pod-layer assignment and building a pod-layer logic graph;
calculating, on the pod-layer logic graph, the initial path Path_hp from the pod where the source frame is located to the pod where the target frame is located;
selecting each wireless connection in Path_hp such that the wireless connection path of Path_hp is the shortest, and obtaining the frame-level path Path_ht;
adding aggregation-layer switches and adding the required wired connections into Path_ht to obtain the route Path_h.
8. The mixed data center network flow scheduling method based on congestion coefficients according to claim 7, characterized in that the identifier consists of an identifier prefix and an identifier suffix, and that assigning an identifier to each frame according to its pod-layer assignment and building the pod-layer logic graph comprises:
for any x ∈ [0, k], randomly selecting k/2 frames that have no identifier prefix yet, setting the identifier prefix of the selected frames to x, and ensuring that the identifier prefixes of any two adjacent frames are different;
setting the identifier suffix for each frame, the value of the identifier suffix ranging from 0 to k/2-1, with the identifier suffixes of any two frames having the same identifier prefix being different;
calculating the connectivity of the pod-layer logic graph under the current identifier assignment;
repeating the above steps multiple times, choosing the identifier allocation scheme with the maximum connectivity, and generating the pod-layer logic graph from the result.
CN201611047098.1A 2015-12-30 2016-11-21 Mixed data center network flow scheduling method based on congestion coefficients Active CN106453084B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511023066 2015-12-30
CN2015110230663 2015-12-30

Publications (2)

Publication Number Publication Date
CN106453084A true CN106453084A (en) 2017-02-22
CN106453084B CN106453084B (en) 2017-07-07

Family

ID=58219462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611047098.1A Active CN106453084B (en) Mixed data center network flow scheduling method based on congestion coefficients

Country Status (1)

Country Link
CN (1) CN106453084B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075394A * 2011-01-14 2011-05-25 Tsinghua University P2i interconnecting structure-based data center
WO2013156903A1 (en) * 2012-04-20 2013-10-24 Telefonaktiebolaget L M Ericsson (Publ) Selecting between equal cost shortest paths in a 802.1aq network using split tiebreakers
WO2014198053A1 (en) * 2013-06-14 2014-12-18 Microsoft Corporation Fault tolerant and load balanced routing
CN104767694A * 2015-04-08 2015-07-08 Dalian University of Technology Data stream forwarding method for Fat-Tree data center network architecture

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109302711A * 2018-08-24 2019-02-01 Xidian University Energy-efficient deployment method for reconfigurable Fat-Tree hybrid data center network
CN115086185A * 2022-06-10 2022-09-20 Tsinghua Shenzhen International Graduate School Data center network system and data center transmission method
CN115086185B * 2022-06-10 2024-04-02 Tsinghua Shenzhen International Graduate School Data center network system and data center transmission method

Also Published As

Publication number Publication date
CN106453084B (en) 2017-07-07

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant