CN103701916B - Dynamic load balancing method for a distributed storage system
- Publication number
- CN103701916B CN103701916B CN201310749353.7A CN201310749353A CN103701916B CN 103701916 B CN103701916 B CN 103701916B CN 201310749353 A CN201310749353 A CN 201310749353A CN 103701916 B CN103701916 B CN 103701916B
- Authority
- CN
- China
- Prior art keywords
- node
- data
- client
- access
- data unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Computer And Data Communications (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The present invention discloses a dynamic load balancing method for a distributed storage system. The method serves a shared-nothing distributed storage system of n storage nodes that provides storage for s data units to client servers on m nodes, and comprises the following steps. Step 1: collect the following statistics: the data accesses issued through each storage node by every connected client, the number of times each of the s data units is accessed through the different storage nodes, and the space utilization of each node. Step 2: using the statistics from step 1 and pre-established bandwidth, remote-access and capacity thresholds (all expressed as percentages), determine for each node of the distributed storage system whether the network load is unbalanced, whether excessive cross-node accesses are causing high latency, and whether capacity is severely unbalanced; depending on the result, either migrate data or redirect the client's access point through the router.
Description
Technical field
The present invention relates to distributed storage systems, and in particular to a dynamic load balancing method for shared-nothing (asymmetric) distributed storage systems.
Background technology
With the development of cloud computing, the number of computing servers in a cloud data center keeps growing, the volume of data to be stored and processed keeps growing, and the storage load becomes ever heavier. Traditional distributed storage systems such as SAN (storage area network) and standalone NAS (network attached storage) cannot easily meet the capacity and bandwidth demands of cloud computing and big data. A variety of distributed storage systems have emerged in response; shared and shared-nothing distributed storage systems are the two main kinds. Every distributed storage system must solve two problems: 1. scaling capacity, and 2. scaling performance. Current distributed storage systems all build a hardware cluster on top of the underlying hardware, but these cluster schemes distribute task requests according to only a single resource in the system and cannot be flexibly configured as needed. This wastes the resources of some platforms in the cluster, reduces the processing capability of the distributed storage system, and prevents it from making correct allocation decisions.
The invention patent application No. 201010002264.2 discloses a method for realizing load balancing, a load balancing server and a cluster system. The load balancing server receives task requests in MINA point-to-point mode and determines the child node that will process each request according to a load balancing strategy and the resource information sent by at least two child nodes, using a configured least-resource algorithm. Specifically, a node weight is set for each of the child nodes, and a resource value and a resource load weight are set for each kind of resource information. A load value is then computed for every child node: for each kind of resource information of the child node, the resource value is multiplied by the resource load weight to obtain a first result; the first results for all kinds of resource information are summed to obtain a second result; and the second result is multiplied by the node weight of the child node to obtain its load value. According to the preconfigured rule, the child node whose load value satisfies the rule, namely the node with the smallest load value, is selected to process the task request, and the request is sent to it; the resource information includes information about the previously completed task request. Although this method achieves flexible cluster configuration and fairly effective load balancing to a certain extent, its algorithm obtains the resource value and resource load weight of every kind of resource information of every child node in real time and computes on them in real time. The algorithm is complex and computationally heavy, places very high demands on the load balancing device, and forces clients to wait too long when accessing data.
Likewise, the invention patent application No. 201110418127.1 discloses a load balancing system, device and method. In that method a network switch extracts feature information from the client traffic. When the feature information matches a first predetermined load balancing strategy, the switch forwards the data packets of the received client traffic directly to multiple server units connected to it, realizing low-layer load balancing. When the feature information matches a second predetermined strategy, the packets are sent to a load balancing device, which then forwards them to the server units connected to the switch according to its own forwarding strategy, realizing high-layer load balancing. The first strategy forwards packets whose feature information indicates large bandwidth to a particular server unit; the second strategy forwards packets whose feature information indicates small bandwidth to the load balancing device. Consequently, this invention needs a separate load balancing device to complete the process. Moreover, both strategies must label feature information as large-bandwidth or small-bandwidth, and for ordinary data packets this distinction is not obvious, so the method is difficult to implement. In addition, it considers only bandwidth and ignores latency, which is a clear defect. Finally, all of the above load balancing strategies rely on a central node acting as the load balancing arbiter. Such centralized load balancing is unsuitable for large-scale cluster systems: the first-tier arbitration device is a single point of failure, and as the number of front-end and back-end devices grows, its increasing load is very likely to become the bottleneck of the whole system.
Summary of the invention
In view of the above problems, the present invention proposes a dynamic load balancing method for shared-nothing distributed storage systems that lets the performance of the system adjust dynamically as the load changes, fully considers the balance of bandwidth, latency and capacity, maximizes the bandwidth the distributed storage system can deliver while keeping latency low, and is simple to implement, thereby remedying the deficiencies of the prior art.
Compared with a shared (centralized) distributed storage system, all storage nodes in a shared-nothing (decentralized) distributed storage system are fully symmetric: every storage node can act as an access point and directly provide storage I/O service to clients. The performance of a shared-nothing distributed storage system can therefore grow approximately linearly with the number of storage nodes. Compared with traditional server load balancing methods, the load balancing method of the present invention applies to the architecture and load of shared-nothing distributed storage systems, jointly considers network bandwidth, storage latency and capacity balance, and can exploit the aggregate performance of the distributed storage system to the greatest extent.
To better illustrate the method of the present invention, assume a shared-nothing distributed storage system of n storage nodes that provides storage for s data units to client servers on m nodes. The load balancing method needs to collect traffic statistics on every node; the bandwidth load is measured by network traffic. If multiple clients connect to the distributed storage system at the same time, the dynamic load balancing method first routes them to different storage nodes by uniform round-robin combined with the statistics. When a new client connection request arrives, the method scans the bandwidth statistics sequentially starting from the cursor recorded last time, finds the first node whose bandwidth is below saturation, and routes the client connection to that node.
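The cursor-based access-point selection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class and field names (`AccessPointRouter`, `bandwidth`, `cursor`) and the saturated fallback to plain round-robin are assumptions.

```python
# Sketch of access-point selection: new client connections resume the
# scan from the cursor saved last time and take the first node whose
# measured bandwidth is below saturation (all names are illustrative).
class AccessPointRouter:
    def __init__(self, n_nodes, full_bandwidth):
        self.n = n_nodes
        self.full = full_bandwidth          # per-node saturation bandwidth
        self.bandwidth = [0.0] * n_nodes    # current measured traffic per node
        self.cursor = 0                     # position recorded after the last route

    def route(self):
        """Return the access-point index for a new client connection."""
        for step in range(self.n):
            node = (self.cursor + step) % self.n
            if self.bandwidth[node] < self.full:   # first non-saturated node
                self.cursor = (node + 1) % self.n  # remember where to resume
                return node
        # All nodes saturated: fall back to plain round-robin (assumed policy).
        node = self.cursor
        self.cursor = (self.cursor + 1) % self.n
        return node
```

With an empty system the scan degenerates to round-robin, matching the "uniform round-robin plus statistics" behaviour described in the text.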
By the nature of the distributed storage system, each client is in most cases directly connected to one access point of the system at any given time. When the data a client wants to access happens not to be stored on its access point, the distributed storage system must have the access point forward the request to the target node where the data resides and serve the access on the client's behalf. Such cross-node access increases the latency of the client's data access, so the overall balancing strategy should avoid cross-node operations as far as possible to keep latency low. By default the distributed storage system follows a nearest-placement principle: new data is created on the access point the client is connected to. In most cases a user creates a file through one storage node, will very probably keep accessing storage through that same session, and the client that created the data is very probably the client that will process it later. So at the beginning, essentially no client needs cross-node access.
Under the conditions and assumptions above, the dynamic load balancing method of the distributed storage system of the present invention, applied to a shared-nothing distributed storage system of n storage nodes that provides storage for s data units to client servers on m nodes, comprises the following steps:
Step 1: collect the following statistics:
a: For each of the m clients, collect statistics on the data accesses issued through each storage node: count LocalAccess, the number of times the client's data access is served directly by its access point (local data access), and RemoteAccess[n-1], the number of times the access must be served by a node other than the access point (access to the target node of a cross-node access). The subscript [n-1] indexes the other nodes besides the access point; for each access point there are n-1 other nodes that count as remote.
b: For each of the s data units (counted per file or per block), count the number of times it is accessed through the different storage nodes: LocalHit, the number of times the data unit is accessed directly by a client through the node where it resides (a local access hit), and RemoteHit[n-1], the number of times it is accessed through each of the other nodes in the cluster. The subscript [n-1] indexes the other nodes besides the node holding the data unit; for each data unit the accesses made directly through its own node are LocalHit, and there are n-1 other nodes whose accesses count as remote.
c: Record the space utilization of each node.
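The three step-1 statistics can be sketched as a small bookkeeping structure. This is an assumed layout for illustration only; the counter names follow the text, but the class, method and field names are not from the patent.

```python
# Per-client LocalAccess / RemoteAccess[n-1], per-data-unit
# LocalHit / RemoteHit[n-1], and per-node space utilisation.
from collections import defaultdict

class LoadStats:
    def __init__(self, n_nodes):
        self.n = n_nodes
        # client -> LocalAccess count; client -> RemoteAccess per remote node
        self.local_access = defaultdict(int)
        self.remote_access = defaultdict(lambda: [0] * n_nodes)
        # data unit -> LocalHit count; data unit -> RemoteHit per node
        self.local_hit = defaultdict(int)
        self.remote_hit = defaultdict(lambda: [0] * n_nodes)
        self.space_used = [0] * n_nodes      # bytes used per node
        self.capacity = [1] * n_nodes        # bytes total per node

    def record(self, client, unit, access_point, data_node):
        """Record one access by `client` to `unit` via `access_point`;
        `data_node` is the node where the unit actually resides."""
        if access_point == data_node:
            self.local_access[client] += 1   # served directly: local
            self.local_hit[unit] += 1
        else:
            self.remote_access[client][data_node] += 1   # cross-node access
            self.remote_hit[unit][access_point] += 1

    def utilisation(self, node):
        """Space utilisation of one node as a fraction (statistic c)."""
        return self.space_used[node] / self.capacity[node]
```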
Step 2: Based on the above statistics, pre-establish a bandwidth threshold, a remote-access threshold and capacity thresholds, all expressed as percentages. Let HighWaterMark be the high capacity threshold and LowWaterMark the low capacity threshold (both relative percentages). Determine whether any node of the distributed storage system shows a severe capacity imbalance (one storage node nearly full while another is nearly empty), i.e. one node has used more than HighWaterMark of its capacity while another node's usage is below LowWaterMark; if so, go to step 31.
Determine whether the number of cross-node data accesses by some client, RemoteAccess[nodeX] = K1, exceeds its local access count LocalAccess = K2 by more than a preset threshold Z, i.e. K1 - K2 > Z; if so, go to step 32.
Determine whether the number of cross-node accesses to some data unit (file or data block) through some node, RemoteHit[nodeY] = L1, exceeds its local access count LocalHit = L2 by more than a preset threshold W, i.e. L1 - L2 > W; if so, go to step 33.
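The three step-2 judgements can be written as three small predicates. The threshold names follow the text; the function layout, and testing the worst remote node with `max`, are assumptions for illustration.

```python
# Hedged sketch of the three step-2 trigger checks.
def capacity_imbalanced(utilisation, high_water_mark, low_water_mark):
    """Capacity test: one node above HighWaterMark while another is
    below LowWaterMark (utilisation values are fractions in [0, 1])."""
    return max(utilisation) > high_water_mark and min(utilisation) < low_water_mark

def client_triggers(remote_access, local_access, z):
    """Client test: K1 - K2 > Z, with K1 the largest cross-node access
    count RemoteAccess[nodeX] and K2 the LocalAccess count."""
    return max(remote_access) - local_access > z

def unit_triggers(remote_hit, local_hit, w):
    """Data-unit test: L1 - L2 > W for the busiest remote node, with
    L1 = RemoteHit[nodeY] and L2 = LocalHit."""
    return max(remote_hit) - local_hit > w
```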
Step 31: Migrate data. For the selected data unit, first determine the target node nodeDst by a DHT (distributed hash) algorithm. If the number of accesses to this data unit through nodeDst is much smaller than the number of accesses through some node nodeY, choose nodeY as the target node instead. If nodeY is the local node, do not migrate this data unit for now.
Step 32: Choose the node with the most cross-node client accesses, i.e. the subscript nodeX with the largest value in the RemoteAccess array, and redirect the client through the router to node nodeX. If the bandwidth of this target node nodeX is constantly saturated while the bandwidth statistics of other nodes are below saturation, do not redirect for now; first try to trigger data balancing, and at the same time remove the client's access-point redirection. If the bandwidth of all nodes is approximately saturated, redirect the client's access point through the router to the target node.
Step 33: Migrate the data to target node nodeY.
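Step 31's target selection can be sketched as below. Everything here is an assumption made for illustration: the patent does not fix a particular DHT, so a plain hash stands in for it, and the "much smaller" comparison is modelled with an arbitrary factor.

```python
# Illustrative sketch of step 31: pick a DHT target, but override it
# with the busiest accessor nodeY when the DHT node rarely uses the unit.
import hashlib

def dht_target(unit_id, n_nodes):
    """Map a data unit to a node with a plain hash (a stand-in for the
    DHT algorithm named in the text)."""
    digest = hashlib.sha256(unit_id.encode()).hexdigest()
    return int(digest, 16) % n_nodes

def choose_migration_target(unit_id, local_node, hits_per_node, n_nodes,
                            much_smaller=0.5):
    """Prefer the DHT node nodeDst; if some node nodeY accesses the unit
    far more often, migrate to nodeY instead; if that choice is the
    local node, return None ("do not migrate for now")."""
    node_dst = dht_target(unit_id, n_nodes)
    node_y = max(range(n_nodes), key=lambda i: hits_per_node[i])
    if hits_per_node[node_dst] < much_smaller * hits_per_node[node_y]:
        node_dst = node_y                       # override DHT choice
    if node_dst == local_node:
        return None                             # stay put for now
    return node_dst
```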
There are generally two means of load balancing. One is data migration: balanced data is the precondition of balanced bandwidth. Data units are redistributed with a distributed hash table algorithm, placing data on its frequently used access points as far as possible to reduce latency. The other is redirecting the client's access point through the router. By using the method above, the present invention fuses access-path balancing and data balancing into one load balancing method. The method is simple to implement, and the whole distributed storage system follows the balancing strategy described above to reach a balanced load with as little latency as possible.
Embodiment
The present invention is further described below in conjunction with an embodiment.
The most important metrics of distributed storage system performance are bandwidth and latency, and under certain circumstances the two interact. In a shared-nothing distributed storage system the data is scattered over multiple storage nodes and any node can act as a data access point. This means that when a user accesses data through one node, the data may reside on another storage node; the node acting as the access point must then fetch the data from the target node on the client's behalf and return it to the client. Such cross-node access inevitably brings higher latency, and when few accesses are concurrent, the high latency limits bandwidth. Does it follow that, to reduce latency and improve bandwidth, every client should simply be routed directly to the target node where its data resides? No: as the number of clients accessing the same group of data grows and the concurrent access volume varies widely, the bandwidth of the whole distributed storage system would be limited to the bandwidth of that one storage node. A shared-nothing distributed storage system therefore needs an advanced load balancing scheduling strategy to maximize the bandwidth it can deliver while keeping latency low.
This scheduling strategy fuses access-path balancing and data balancing into one load balancing method. The following strategy assumes a shared-nothing distributed storage system of n storage nodes providing storage for s data units to client servers on m nodes.
The load balancing method of the distributed storage system collects traffic statistics on every node; the bandwidth load is measured by network traffic. If multiple clients connect to the distributed storage system at the same time, the strategy first routes them to different storage nodes by uniform round-robin combined with the statistics. When a new client connection request arrives, the balancing strategy scans the bandwidth statistics sequentially starting from the cursor recorded last time, finds the first node whose bandwidth is below saturation, and routes the client connection to that node.
The distributed storage system follows a nearest-placement principle: new data is created on the access point the client is connected to. In most cases a user creates a file through one storage node, will very probably keep accessing storage through that same session, and the client that created the data is very probably the client that will process it later. So at the beginning, essentially no client needs cross-node access.
This load balancing method also requires the following three statistics:
1: For each of the m clients, collect statistics on its connections to each storage node: count LocalAccess, the number of local data access hits of the client's data access service, and RemoteAccess[n-1], the access counts of the target nodes of its cross-node accesses. The subscript [n-1] indexes the other nodes besides the access point; for each access point there are n-1 other nodes that count as remote.
2: Count the accesses to each of the s data units per file or per block: LocalHit, the number of local access hits, and RemoteHit[n-1], the number of times the data unit is accessed through each of the other nodes in the cluster. The subscript [n-1] indexes the other nodes besides the node holding the data unit; for each data unit the accesses made directly through its own node are LocalHit, and the accesses made through the n-1 other access points are RemoteHit[n-1].
3: Record the space utilization of each node.
There are two means of load balancing:
1. Data migration: balanced data is the precondition of balanced bandwidth. Data units are redistributed with a DHT algorithm, placing data on its frequently used access points as far as possible to reduce latency;
2. Redirecting the client's access point through the router.
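The "redistribute data units with a DHT algorithm" step above can be sketched with a small consistent-hash ring. This is an assumed concrete choice, not the patent's: the text names only "a DHT algorithm", so the ring, the virtual-node count and MD5 hashing are all illustrative.

```python
# Minimal consistent-hash ring: each node contributes `vnodes` points on
# the ring, and a data unit maps to the first point at or after its hash.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=64):
        self.ring = []                       # sorted (hash, node) points
        for node in nodes:
            for v in range(vnodes):
                h = self._hash(f"{node}#{v}")
                self.ring.append((h, node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def locate(self, unit_id):
        """Return the node responsible for a data unit (nodeDst)."""
        h = self._hash(unit_id)
        i = bisect.bisect(self.ring, (h,))   # first ring point >= h
        return self.ring[i % len(self.ring)][1]
```

Because the mapping depends only on the hash, every node computes the same nodeDst independently, which is what lets a decentralized system avoid the central arbiter criticized in the background section.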
Pre-establish the high capacity threshold HighWaterMark and the low capacity threshold LowWaterMark, both percentages; determine whether any node of the distributed storage system shows a severe capacity imbalance, and according to the result either migrate data or redirect the client's access point through the router. Three conditions trigger load balancing:
Condition 1: Capacity is severely unbalanced: one node has used more than HighWaterMark of its capacity while another node's usage is below LowWaterMark;
Condition 2: The number of cross-node data accesses by some client exceeds its local access count by more than a preset threshold Z. Let the client's cross-node access count RemoteAccess[nodeX] be K1 and its local access count LocalAccess be K2; the condition is K1 - K2 > Z;
Condition 3: The number of cross-node accesses to a data unit (file or data block) through some node exceeds its local access count by more than a preset threshold W. Let RemoteHit[nodeY] be L1 and the local access count LocalHit be L2; the condition is L1 - L2 > W.
Balancing decision rules:
When condition 1 triggers, migrate data: for the selected data unit, first determine the target node nodeDst by the DHT (distributed hash) algorithm and migrate the data there. If the number of accesses to this data unit through nodeDst is much smaller than the number of accesses through some node nodeY, choose nodeY as the target node instead; if nodeY is the local node, do not migrate the data unit.
When condition 2 triggers, choose the node with the most cross-node client accesses, i.e. the subscript nodeX with the largest value in the RemoteAccess array, and redirect the client through the router to node nodeX. If the bandwidth of this target node nodeX is constantly saturated while the bandwidth statistics of other nodes are below saturation, do not redirect for now; first try to trigger data balancing, and at the same time remove the client's access-point redirection. If the bandwidth of all nodes is approximately saturated, redirect the client's access point through the router to the target node.
When condition 3 triggers, migrate the data to target node nodeY.
The present invention combines the two means of load balancing (data migration and redirecting the client's access point through the router), selecting a different migration strategy for each triggering condition. Load balancing in the prior art generally relies on a central node acting as the arbiter; such centralized load balancing is unsuitable for large-scale cluster systems, because the first-tier arbitration device is a single point of failure and, as the number of front-end and back-end devices grows, its increasing load is very likely to become the bottleneck of the whole system. The balancing strategy provided by the present invention, in contrast, is especially suitable for large-scale cluster systems: the whole distributed storage system follows the balancing strategy described above to reach a balanced load with as little latency as possible.
Although the present invention has been specifically shown and described with reference to a preferred embodiment, those skilled in the art should understand that various changes in form and detail may be made to it without departing from the spirit and scope of the present invention as defined by the appended claims, and such changes fall within the scope of protection of the present invention.
Claims (3)
1. A dynamic load balancing method of a distributed storage system, the method being applied to a shared-nothing distributed storage system of n storage nodes that provides storage for s data units to client servers on m nodes, comprising the following steps:
Step 1: collect statistics on the data accesses issued through each storage node by every connected client, count the number of times each of the s data units is accessed through the different storage nodes, and record the space utilization of each node;
Step 2: based on the above statistics, pre-establish a bandwidth threshold, a remote-access threshold and capacity thresholds, all expressed as percentages;
Let HighWaterMark be the high capacity threshold and LowWaterMark the low capacity threshold; determine whether any node of the distributed storage system shows a severe capacity imbalance, i.e. when the judged result satisfies: one node has used more than HighWaterMark of its capacity and another node's usage is below LowWaterMark, go to step 31;
Let RemoteAccess[nodeX], the number of cross-node accesses to data by some client through node nodeX, be K1 and the local access count LocalAccess be K2; when the judged result satisfies K1 - K2 > Z for the preset threshold Z, go to step 32;
Let RemoteHit[nodeY], the number of cross-node accesses to a data unit through node nodeY, be L1 and the local access count LocalHit be L2; when the judged result satisfies L1 - L2 > W for the preset threshold W, go to step 33;
Step 31: migrate data: for the selected data unit, first determine the target node nodeDst by a DHT algorithm; if the number of accesses to this data unit through nodeDst is much smaller than the number of accesses through some node nodeY, choose nodeY as the target node instead; if nodeY is the local node, do not migrate this data unit for now;
Step 32: choose the node nodeX with the most cross-node client accesses; if the bandwidth of node nodeX is constantly saturated while the bandwidth statistics of other nodes are below saturation, do not redirect for now, first try to trigger data balancing, and at the same time remove the client's access-point redirection; if the bandwidth of all nodes is approximately saturated, redirect the client's access point through the router to target node nodeX;
Step 33: migrate the data to target node nodeY.
2. The dynamic load balancing method of a distributed storage system according to claim 1, wherein in step 1 the statistics on the data accesses issued through each storage node by every connected client are specifically: count LocalAccess, the number of times each client's data access service is served directly by its access point, and RemoteAccess[n-1], the number of times the access must be served by a node other than the access point, where the subscript [n-1] indexes the other nodes besides the access point.
3. The dynamic load balancing method of a distributed storage system according to claim 2, wherein in step 1 the counting of the number of times each of the s data units is accessed through the different storage nodes is specifically: count, per file or per block, LocalHit, the number of times the data unit is accessed directly by a client, and RemoteHit[n-1], the number of times it is accessed through each of the other nodes in the cluster, where the subscript [n-1] indexes the other nodes besides the access point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310749353.7A CN103701916B (en) | 2013-12-31 | 2013-12-31 | The dynamic load balancing method of distributed memory system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103701916A CN103701916A (en) | 2014-04-02 |
CN103701916B true CN103701916B (en) | 2017-10-27 |
Family
ID=50363310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310749353.7A Active CN103701916B (en) | 2013-12-31 | 2013-12-31 | The dynamic load balancing method of distributed memory system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103701916B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104202434A (en) * | 2014-09-28 | 2014-12-10 | 北京奇虎科技有限公司 | Node access method and device |
CN104331253B (en) * | 2014-10-30 | 2017-12-15 | Inspur Electronic Information Industry Co., Ltd. | Method for computing object migration in an object storage system |
CN105429766A (en) * | 2015-11-04 | 2016-03-23 | Shanghai Science and Technology Network Communication Co., Ltd. | Energy consumption optimization method for a cloud computing data center |
CN105426129A (en) * | 2015-11-18 | 2016-03-23 | Shanghai Xinchu Integrated Circuit Co., Ltd. | Method for optimizing hybrid memory data storage |
CN105516328A (en) * | 2015-12-18 | 2016-04-20 | Inspur (Beijing) Electronic Information Industry Co., Ltd. | Dynamic load balancing method, system, and devices for a distributed storage system |
CN106933868B (en) * | 2015-12-30 | 2020-04-24 | Alibaba Group Holding Limited | Method for adjusting data fragment distribution, and data server |
CN105912612B (en) * | 2016-04-06 | 2019-04-05 | Zhongguang Tianze Media Co., Ltd. | Distributed file system and data-balanced placement method for the system |
CN106294538B (en) | 2016-07-19 | 2019-07-16 | Zhejiang Dahua Technology Co., Ltd. | Method and device for migrating data records from a node |
CN107547643B (en) * | 2017-08-29 | 2021-06-29 | New H3C Technologies Co., Ltd. | Load sharing method and device |
CN108196788B (en) * | 2017-12-28 | 2021-05-07 | New H3C Technologies Co., Ltd. | QoS index monitoring method, device and storage medium |
CN110244901B (en) * | 2018-03-07 | 2021-03-26 | Hangzhou Hikvision System Technology Co., Ltd. | Task allocation method and device, and distributed storage system |
CN108924203B (en) * | 2018-06-25 | 2021-07-27 | Shenzhen Kingdee Tianyan Cloud Computing Co., Ltd. | Adaptive data replica distribution method, distributed computing system and related equipment |
CN109359096A (en) * | 2018-09-14 | 2019-02-19 | Foshan University | Blockchain-storage-based digital asset secure sharing method and device |
CN109669636B (en) * | 2018-12-20 | 2020-04-21 | Shenzhen Lingluo Technology Co., Ltd. | Distributed intelligent storage system |
CN111506254B (en) * | 2019-01-31 | 2023-04-14 | Alibaba Group Holding Limited | Distributed storage system and management method and device thereof |
CN110471893B (en) * | 2019-08-20 | 2022-06-03 | Zeng Liang | Method, system and device for sharing distributed storage space among multiple users |
CN111459914B (en) * | 2020-03-31 | 2023-09-05 | Beijing Kingsoft Cloud Network Technology Co., Ltd. | Optimization method and device for a distributed graph database, and electronic equipment |
CN112187738A (en) * | 2020-09-11 | 2021-01-05 | China UnionPay Co., Ltd. | Service data access control method, device and computer-readable storage medium |
CN114827180B (en) * | 2022-06-22 | 2022-09-27 | Puhui Zhizao Technology Co., Ltd. | Distribution method for distributed cloud data storage |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101854299A (en) * | 2010-05-21 | 2010-10-06 | Institute of Software, Chinese Academy of Sciences | Dynamic load balancing method for a publish/subscribe system |
CN102136003A (en) * | 2011-03-25 | 2011-07-27 | Shanghai Jiao Tong University | Large-scale distributed storage system |
CN102143215A (en) * | 2011-01-20 | 2011-08-03 | PLA University of Science and Technology | Network-based PB-level cloud storage system and processing method thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9047351B2 (en) * | 2010-04-12 | 2015-06-02 | Sandisk Enterprise Ip Llc | Cluster of processing nodes with distributed global flash memory using commodity server technology |
- 2013-12-31: CN application CN201310749353.7A, granted as patent CN103701916B (status: Active)
Non-Patent Citations (1)
Title |
---|
Research on P2P Overlay Network Topology Optimization Technology; Ren Hao; China Doctoral Dissertations Full-text Database, Information Science and Technology Volume; 2009-07-15; I139-8 * |
Also Published As
Publication number | Publication date |
---|---|
CN103701916A (en) | 2014-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103701916B (en) | The dynamic load balancing method of distributed memory system | |
Long et al. | LABERIO: Dynamic load-balanced routing in OpenFlow-enabled networks | |
Hong et al. | Finishing flows quickly with preemptive scheduling | |
US6986139B1 (en) | Load balancing method and system based on estimated elongation rates | |
AU2015229200B2 (en) | Coordinated admission control for network-accessible block storage | |
US8087025B1 (en) | Workload placement among resource-on-demand systems | |
US7764615B2 (en) | Distributing rate limits and tracking rate consumption across members of a cluster | |
US20030236887A1 (en) | Cluster bandwidth management algorithms | |
US7117242B2 (en) | System and method for workload-aware request distribution in cluster-based network servers | |
CN107026907A (en) | Load balancing method, load balancer, and load balancing system | |
MX2015006471A (en) | Method and apparatus for controlling utilization in a horizontally scaled software application. | |
CN102611735A (en) | Load balancing method and system of application services | |
CN108881348A (en) | Method for controlling quality of service, device and storage server | |
Xie et al. | Cutting long-tail latency of routing response in software defined networks | |
CN105022717A (en) | Network on chip resource arbitration method and arbitration unit of additional request number priority | |
CN103338252A (en) | Distributed database concurrence storage virtual request mechanism | |
CN106998340B (en) | Load balancing method and device for board resources | |
Liu et al. | Deadline guaranteed service for multi-tenant cloud storage | |
Rashid | Sorted-GFF: An efficient large flows placing mechanism in software defined network datacenter | |
Aghdai et al. | In-network congestion-aware load balancing at transport layer | |
Rikhtegar et al. | DeepRLB: A deep reinforcement learning‐based load balancing in data center networks | |
CN108111567A (en) | Method and system for balancing server load | |
Sahoo et al. | Ferrying vehicular data in cloud through software defined networking | |
CN108804535A (en) | Software defined storage (SDS) system with network hierarchy | |
Hwang et al. | Load balancing and routing mechanism based on software defined network in data centers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |