CN104796494A - Data transmission method for cloud platform - Google Patents

Data transmission method for cloud platform

Info

Publication number
CN104796494A
Authority
CN
China
Prior art keywords
server
user
resource
transaction
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510232702.7A
Other languages
Chinese (zh)
Inventor
高爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Boyuan Technology Co Ltd
Original Assignee
Chengdu Boyuan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Boyuan Technology Co Ltd filed Critical Chengdu Boyuan Technology Co Ltd
Priority to CN201510232702.7A
Publication of CN104796494A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network

Abstract

The invention provides a data transmission method for a cloud platform. After a master server receives a user request, it first selects internal resources for storing data backups, and selects resources of other servers only when internal storage resources are insufficient. When choosing among other servers, the master server determines the storage location for each backup according to the storage overhead and the inter-server communication overhead. The method increases the resource utilization of the cloud platform system and improves its load balancing.

Description

Data transmission method for a cloud platform
Technical field
The present invention relates to cloud computing, and in particular to a data transmission method for a cloud platform.
Background technology
Cloud storage features high reliability, high scalability, and low cost. Each storage machine is an independent storage node, and adding or removing nodes does not affect normal system operation, so the system handles large file volumes well and has clear advantages for processing massive video data. However, video data is partitioned into many blocks, and each block and its versions are stored on different nodes of the cloud platform, so transactions must be scheduled optimally across a great number of nodes. Yet current scheduling and resource-allocation mechanisms in cloud platforms are static, local mechanisms: when allocating and scheduling resources, they consider only the current resource state of the node nearest the data, and do not optimize resource allocation from a global, system-wide perspective.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes a data transmission method for a cloud platform, comprising:
after a master server receives a user request, selecting internal resources for storing data backups, and selecting resources of other servers for storing the backups when internal storage resources are insufficient;
when other servers are selected, determining, by the master server, the storage location of each data backup according to the storage overhead and the communication overhead between servers, and transmitting the user data to the determined storage location to complete the backup.
Preferably, before the master server receives the user request, the method further comprises: through the cloud platform communication interface of the master server, the user submits a usage application to the cloud servers, queries the execution state of transactions, and checks the integrity of the stored data.
The master server comprises a resource monitor that tracks changes in cloud server resources, collects the resource information of each server, and provides the scheduler with the current resource state of each server. According to the user's transaction request information and the current resource information of each server, the scheduler dynamically stores the multiple data backups of a user transaction onto the resources of different servers according to a predefined policy, and feeds the scheduling result back to the cloud platform communication interface. When a new user transaction arrives, the scheduler distributes its data backups according to storage overhead and communication overhead.
Preferably, after the scheduling policy computed by the master server stores the data backup of a user transaction on another cloud server, that backup remains on the corresponding cloud server until the storage time applied for by the user expires; the backup stored on another cloud server is not written back to the master server's cloud platform even when the master server's resource space becomes idle.
A data backup cannot be further partitioned and stored across different server platforms; one backup can only reside in the hardware resources of a single server at a time.
The resource request information of each transaction is represented by a four-tuple {t_ij, S_ij, T_ij, D_ij}, where the server currently serves W users U_1, ..., U_W, the users having N_1, ..., N_W transactions respectively; t_ij denotes the j-th subtransaction of the i-th user, with i = 1, 2, ..., W and j = 1, 2, ..., N_i; and S_ij, T_ij and D_ij denote the start time, storage time and storage capacity of transaction t_ij, respectively.
The data backup overhead cost of the servers is calculated as follows:
cost = Σ_{i=1..W} Σ_{j=1..N_i} Σ_{k=1..L} (T_ijk × D_ij × O_ijk + D_ij × y_ijk)
where L is the total number of servers; T_ijk denotes the time for which a backup of transaction j of user i is stored in the resources of the k-th server; D_ij denotes the storage volume of the j-th transaction of user i; O_ijk denotes the number of data versions of the j-th transaction of user i stored on the k-th server; and y_ijk is a Boolean variable equal to 1 when the k-th server stores a backup of transaction j of user i, and 0 otherwise.
Compared with the prior art, the present invention has the following advantage:
the proposed method improves the resource utilization and load balancing of the cloud platform system.
Accompanying drawing explanation
Fig. 1 is a flowchart of the cloud platform data transmission method according to an embodiment of the present invention.
Detailed description of the embodiments
A detailed description of one or more embodiments of the invention is provided below, together with drawings illustrating its principles. The invention is described in connection with such embodiments, but is not limited to any particular embodiment. Its scope is defined only by the claims, and it covers many alternatives, modifications and equivalents. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention; these details are provided for exemplary purposes, and the invention may also be practiced according to the claims without some or all of them.
Under a cloud computing environment, the present invention models the transactions of video data processing together with transaction scheduling and resource allocation, and proposes a transaction scheduling method oriented toward video data under this model. Parallel transaction processing is planned on the basis of the block-based, multi-version file layout of video data files in the cloud storage system; data nodes in the cloud platform are clustered by the mapping relations between data blocks and their versions; and unexecuted transactions are scheduled dynamically based on feedback information from already completed transactions, thereby improving resource utilization and load balancing. Fig. 1 is the flowchart of the cloud platform data transmission method according to an embodiment of the present invention.
A user request in a video application is submitted to the client of the cloud storage system in the form of a task description file. When a transaction is submitted, the client divides it into multiple subtransactions for parallel execution according to the data block information, each subtransaction being associated with one data block. Different subtransactions of the same transaction can thus be distributed to different nodes for execution, avoiding the bottleneck caused by resource contention between subtransactions and better matching the high resource consumption of video data processing. The data block information is described by a tree Tr:
Tr = (C, R), where C is a set of data elements, each element representing a data block, and R is a set of binary relations, each element indicating whether the contents of two data blocks are identical. An element c_i of C is described by a two-tuple vector: c_i = <block_i, a_i>,
where block_i is the number of data block i and a_i is the number of the node where data block i resides. An element r_{i,j} of R indicates that data block j is a version of data block i. Transactions are submitted to the client of the cloud storage system at arbitrary times, and the transaction stream submitted to the client can be described as J = {J_0, J_1, J_2, ..., J_i, ..., J_{n-1}, J_n},
where J_i is described by a six-tuple vector: J_i = <transid_i, decid_i, fileid_i, level_i, rcv_t_i, end_t_i>. Here transid is the transaction number, decid the task description file number, fileid the number of the video data file the transaction needs to process, level the transaction priority, rcv_t the submission time, and end_t the deadline of the transaction. A submitted transaction J_i is divided by the client into multiple subtransactions, which are described by a directed graph GR:
GR = <V, E>
Each element of the vertex set V represents a subtransaction. A subtransaction v_i is described by a nine-tuple vector: v_i = <taskid_i, transid_i, type_i, wl_i, cpu_i, mem_i, disk_i, band_i, block_i>,
where taskid is the subtransaction number, transid the number of the transaction the subtransaction belongs to, type the subtransaction type, wl the workload (in millions of instructions), and cpu, mem, disk, band the quantities of abstract virtual machine resources (CPU, memory, disk and bandwidth) occupied during execution. Although CPU resources can be time-shared among multiple subtransactions, many video tasks (such as encoding and decoding) are computation-intensive and consume considerable CPU; running several computation-intensive transactions on one CPU core simultaneously causes execution speed to drop sharply, so the number of computation-intensive transactions per processor must be limited. CPU resources are therefore also measured as quantities, one processor being abstracted into 4 CPU resources. block_i is the number of the data block the subtransaction needs to process. Each element of the edge set E represents a dependency between two subtransactions: e_{i,j} ∈ E means that subtransaction v_i is a predecessor of subtransaction v_j, and v_j a successor of v_i.
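As an illustration, the data structures above (the element c_i of the block set C, the nine-tuple subtransaction, and the graph GR) could be sketched as follows; all class and field names are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataBlock:
    """Element c_i = <block_i, a_i> of the set C."""
    block_id: int   # number of the data block
    node_id: int    # number of the node where the block resides

@dataclass
class SubTransaction:
    """Nine-tuple v_i = <taskid, transid, type, wl, cpu, mem, disk, band, block>."""
    task_id: int
    trans_id: int
    type_id: int    # transaction class k (e.g. encode, decode)
    wl: float       # workload in millions of instructions
    cpu: int
    mem: int
    disk: int
    band: int
    block_id: int   # data block this subtransaction processes

@dataclass
class TransactionGraph:
    """Directed graph GR = <V, E> of subtransactions."""
    vertices: dict = field(default_factory=dict)  # task_id -> SubTransaction
    edges: set = field(default_factory=set)       # (i, j): v_i precedes v_j

    def ready(self, done=frozenset()):
        """Subtransactions with no predecessor, or whose predecessors
        have all completed (step 3 of the scheduling procedure)."""
        return [v for tid, v in self.vertices.items()
                if tid not in done
                and all(i in done for (i, j) in self.edges if j == tid)]
```

The `ready` helper mirrors the selection rule used later in step 3: a subtransaction becomes schedulable only when every predecessor edge leading into it has completed.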
Taking the characteristics of video data processing into account, in the proposed transaction scheduling and resource allocation model each data node runs a scheduling process. These scheduling processes know the current state of the virtual machines on their own data node, and the scheduling processes of different nodes can communicate with the client's scheduling process to exchange transaction scheduling and control information. After the client's scheduling process divides a transaction into subtransactions according to the queried video data block information tree Tr, it clusters the data nodes according to the binary relation set R, with the following rule:
a_i ∈ D_m; and for any a_i, a_j ∈ D_m with a_i ≠ a_j, there exists r_{i,j} ∈ R.
All data nodes storing the same data block m form a cluster set D_m, with a_i a node in D_m. The client's scheduling process selects the optimal node within D_m to execute the task, from the standpoint of improving resource utilization and load balancing.
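A minimal sketch of this clustering rule, under the assumption that `blocks` maps each block number to the number of the node storing it, and `versions` holds the pairs of the relation set R:

```python
def cluster_set(blocks, versions, m):
    """Return D_m: the set of nodes storing data block m or one of its
    versions.  blocks: dict block_id -> node_id; versions: set of pairs
    (i, j) from R, meaning block j is a version of block i.
    Illustrative sketch, not the patent's exact procedure."""
    related = {m}
    related |= {j for (i, j) in versions if i == m}   # versions of m
    related |= {i for (i, j) in versions if j == m}   # m is a version of i
    return {blocks[b] for b in related if b in blocks}
```

The client would then send the transaction request to every node in the returned set and compare their feedback.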
When the scheduling process of a data node receives a transaction, the transaction is either inserted into the node's waiting queue according to its priority, or preempts lower-priority transactions and executes immediately. When a transaction finishes, the data node's scheduling process notifies the client's scheduling process, so that the client can schedule subsequent transactions.
In a cloud storage system the data blocks of video data are distributed over a huge number of nodes, so each node storing video data blocks must effectively reduce the load on the management node. In the proposed transaction scheduling model, each node has the ability to allocate its own resources and schedule transactions, and to gather statistics on its own resources and load state.
In the proposed method of dynamic resource allocation and transaction scheduling, the scheduling process of client c sends a transaction request to all data nodes (a_1, a_2, a_3) in the cluster set D_m corresponding to data block m. Each data node in D_m computes an estimate of the execution effect of transaction v_i on that node, based on its internal resources, its current load, and dynamic factors extracted from historical runtime state information, and feeds the estimate back to the client's scheduling process. Based on this feedback, the client's scheduling process selects the node that can complete the transaction earliest to execute v_i. This mode, in which the client leads the scheduling process while the data nodes lead resource allocation and provide dynamic feedback, benefits the load balancing of the whole cloud storage system.
Step 1. The scheduling process of client c queries the management node for the block information tree Tr of the video file the transaction needs to process.
Step 2. The scheduling process of client c divides the transaction into multiple subtransactions according to the block information of the video file, each subtransaction processing one data block; the division result is output in the form of a directed graph GR.
Step 3. Check whether GR contains a subtransaction without predecessors. If not, wait. Otherwise, select from GR a subtransaction that has no predecessors, or whose predecessor subtransactions have all completed; call it v_i.
Step 4. Using the block information tree Tr and the data block number block_i in the description of subtransaction v_i, client c clusters the nodes and outputs a node set D_blocki = {a_1, a_2, ..., a_j, ..., a_{n-1}}. Client c sends to all nodes in D_blocki a transaction request containing the description of subtransaction v_i.
Step 5. A node a_j receiving the transaction request estimates, from the workload wl_i and type of subtransaction v_i and the resource characteristics of node a_j, the estimated execution time ETM'_{i,j} of v_i on a_j. The element ETM_{i,j} of the transaction time matrix denotes the estimated time of subtransaction v_i on node a_j. When the workload of v_i is very large, the estimate can deviate considerably from the actual value, so an adjustment factor fm_{k,j} is introduced to calibrate ETM'_{i,j} and obtain the final ETM_{i,j}:
ETM_{i,j} = ETM'_{i,j} × fm_{k,j}
where fm_{k,j} is the adjustment factor of transaction class k on node a_j, reflecting the historical characteristics of node a_j when executing class-k transactions. Its initial value is 1. Each time node a_j finishes a subtransaction of class k, it computes:
fm'_{k,j} = Tact_{i,j} / ETM_{i,j}
fm_{k,j} = (fm_{k,j} × λ + fm'_{k,j} × (1 - λ)) / 2
where fm_{k,j} on the left is the updated value, and Tact_{i,j} is the actual processing time of the subtransaction on node a_j (including the time spent waiting for resources before it started executing or after being preempted). If |fm'_{k,j} - 1| > Δfm, this fm'_{k,j} is discarded; Δfm is a predefined threshold and λ a predefined weight, which together regulate the update speed and amplitude of fm_{k,j}. If the update amplitude of fm_{k,j} is too small, its value ages easily when the overall load of node a_j changes, hurting the accuracy of the ETM_{i,j} estimate; if the amplitude is too large, its value is easily disturbed by individual local transactions and becomes unstable, likewise hurting the accuracy of the estimate.
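The calibration in step 5 can be sketched as follows; the update formula is implemented exactly as written above, while the default values of λ and Δfm are illustrative assumptions:

```python
def calibrated_etm(etm_raw, fm):
    """ETM_{i,j} = ETM'_{i,j} * fm_{k,j}."""
    return etm_raw * fm

def update_fm(fm, tact, etm, lam=0.5, delta_fm=0.5):
    """Update fm_{k,j} after node a_j finishes a class-k subtransaction.
    tact: actual processing time Tact_{i,j}; etm: calibrated ETM_{i,j}.
    Samples deviating from 1 by more than delta_fm are discarded."""
    fm_sample = tact / etm                      # fm'_{k,j}
    if abs(fm_sample - 1.0) > delta_fm:         # outlier: keep the old value
        return fm
    return (fm * lam + fm_sample * (1 - lam)) / 2
```

A node would keep one `fm` value per (transaction class, node) pair, initialized to 1, and feed each completed subtransaction's actual time back through `update_fm`.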
Step 6. Because resources are preempted by priority in the transaction scheduling model, the invention further proposes a resource-queue management method to compute the resource ready time of subtransaction v_i on node a_j, denoted ETA_{i,j}. The inputs are: the queue P of transactions currently executing on node a_j, the queue W of transactions waiting for resources on node a_j, the current quantity of idle resources ar_j on node a_j, and the description of subtransaction v_i. The output is the resource ready time ETA_{i,j} of v_i on node a_j.
1. Initialize ETA_{i,j} to the current time.
2. Push all transactions in queue P into queue G in order.
3. Push all transactions in queue W into queue G in order; let the length of G be n.
4. Insert subtransaction v_i into queue G according to its priority; let its position in G be i.
5. Let r_i be the resources needed by v_i. If ar_j > r_i, go to step 14.
6. Let k be the position of a subtransaction v_k in queue G. For each k = i+1, ..., n: if v_k ∈ P (it is executing but has lower priority than v_i and will be preempted), then set ar_j = ar_j + r_k and remove v_k from queue G.
7. Push the executing transactions in G into queue G_1 in order, and the waiting transactions in G into queue G_2 in order.
8. From each transaction's estimated time and elapsed processing time (excluding time spent waiting after preemption), compute the current remaining processing time of each transaction in G_1, and sort G_1 by remaining processing time so that transactions with little remaining time complete as early as possible and release their resources.
9. Let vh_1 be the current head of queue G_1; update ETA_{i,j} and ar_j:
ETA_{i,j} = ETA_{i,j} + (current remaining processing time of vh_1), ar_j = ar_j + rh_1.
10. Pop the head transaction vh_1 from queue G_1.
11. Let vh_2 be the current head of queue G_2, and check whether ar_j > rh_2. If not, go to step 9.
12. Pop the head transaction vh_2 from queue G_2, push it onto the tail of queue G_1, and update ar_j:
ar_j = ar_j - rh_2
13. Check whether queue G_2 is empty; if not, go to step 9.
14. Output ETA_{i,j} and terminate.
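A much-simplified sketch of this resource-queue method: it lets running transactions finish in order of remaining time, releases their resources, admits waiting higher-priority transactions, and returns the time at which the requested quantity of resources becomes free. The data shapes are assumptions for illustration, and admitted waiters are assumed to hold their resources beyond the horizon of the sketch:

```python
def resource_ready_time(now, executing, waiting, free, need):
    """Estimate ETA_{i,j}: when `need` resource units become free on a node.
    executing: list of (remaining_time, resources) for running transactions;
    waiting: resource demands of higher-priority waiting transactions,
    already in priority order; free: currently idle resources ar_j."""
    if free >= need:                  # step 5: enough idle resources now
        return now
    queue = list(waiting)
    eta = now
    # steps 8-13: finish running transactions shortest-remaining-time-first
    for remaining, res in sorted(executing):
        eta = now + remaining         # this transaction completes ...
        free += res                   # ... and releases its resources
        while queue and free >= queue[0]:
            free -= queue.pop(0)      # admit a waiting transaction
        if free >= need:
            return eta                # step 14: resources ready for v_i
    return eta
```

Unlike the full method above, this sketch does not model preemption (step 6) or the later release of admitted waiters; it only illustrates the completion-and-release loop.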
Step 7. Compute the earliest finish time θ_{i,j} of transaction v_i on node a_j:
θ_{i,j} = ETA_{i,j} + ETM_{i,j}
Both factors, ETM_{i,j} and ETA_{i,j}, affect the accuracy of θ, and thus directly affect the optimality of transaction scheduling and resource allocation. The influence of the first factor, ETM_{i,j}, is effectively suppressed by the adjustment factor fm_{k,j} of step 5. For the second factor, ETA_{i,j}, a further adjustment factor fd_j is introduced into the computation of θ.
Let v_x be a historical transaction completed on node a_j. Immediately after v_x finishes, the latest adjustment factor fd_j of node a_j is computed:
fd_j = α × (Tact_start_x - ETA_{x,j}) / ETA_{x,j}
where Tact_start_x is the real resource ready time of transaction v_x (as opposed to the resource ready time ETA_{x,j} computed by the resource-queue management method), and α is a predefined constant. The adjustment factor fd_j expresses the amount of delay incurred per unit of waiting time, and the parameter α regulates the amplitude with which fd_j dynamically corrects θ. Because of preemption events, the fd_j computed from the execution of high-priority transactions is markedly smaller; the invention therefore extends fd_j to two dimensions, fd_{j,l}, where j denotes the node and l the priority, so that transactions of different priorities use different adjustment factors when computing their earliest finish times. With the adjustment factor fd_j introduced, the final earliest finish time θfd_{i,j} of a transaction is computed as:
θfd_{i,j} = (1 + fd_j) × ETA_{i,j} + ETM_{i,j}
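These formulas translate directly into code; the default value of α below is an illustrative assumption, and `pick_node` anticipates the minimum-θfd selection of step 8:

```python
def update_fd(tact_start, eta, alpha=0.1):
    """fd_j = alpha * (Tact_start_x - ETA_{x,j}) / ETA_{x,j}:
    delay incurred per unit of predicted waiting time, scaled by alpha."""
    return alpha * (tact_start - eta) / eta

def earliest_finish(eta, etm, fd):
    """theta_fd_{i,j} = (1 + fd_j) * ETA_{i,j} + ETM_{i,j}."""
    return (1 + fd) * eta + etm

def pick_node(theta_by_node):
    """Step 8: the client selects the node with the minimum theta_fd."""
    return min(theta_by_node, key=theta_by_node.get)
```

In the two-dimensional form fd_{j,l}, each node would simply keep one `fd` value per priority level l.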
Step 8. Node a_j feeds the earliest finish time θfd_{i,j} of subtransaction v_i back to client c. Client c selects from D_blocki = {a_1, a_2, ..., a_j, ..., a_{n-1}} the node a_x to execute subtransaction v_i, where θfd_{i,x} satisfies:
θfd_{i,x} = min(θfd_{i,1}, θfd_{i,2}, ..., θfd_{i,n-1})
After client c assigns subtransaction v_i to node a_x, v_i is inserted into the waiting queue of node a_x according to its priority, or preempts lower-priority transactions and executes immediately.
Step 9. Client c checks whether GR is empty. If it is, the scheduling process ends; otherwise, return to step 3.
To prevent the situation where, when many high-priority transactions arrive at a node, low-priority transactions wait ever longer in the node's waiting queue or are repeatedly preempted in the execution queue, so that their actual processing time keeps extending, each node traverses the transactions in its waiting queue at a predefined frequency and computes an execution-efficiency value θ:
θ = γ × (Tacte_y / ETM_{y,j})
where Tacte_y is the current actual elapsed time of transaction v_y (including time spent preempted and in the waiting queue) and γ is a predefined constant. The priority of v_y is then raised by θ levels and Tacte_y is reset, after which v_y is reinserted into the node's waiting queue according to its new priority, or preempts lower-priority transactions and executes immediately.
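The anti-starvation rule above could be sketched as follows; rounding θ down to whole priority levels and the default γ are assumptions:

```python
def antistarvation_boost(priority, tacte, etm, gamma=1.0):
    """Raise a starved transaction's priority by theta = gamma * (Tacte/ETM)
    levels and reset its elapsed-time counter Tacte.
    tacte: actual elapsed time so far; etm: estimated execution time."""
    theta = int(gamma * tacte / etm)   # whole priority levels (assumption)
    return priority + theta, 0.0       # (new priority, reset Tacte)
```

A node would apply this to every entry of its waiting queue at the predefined traversal frequency.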
To meet users' transaction requests, reduce failure recovery time, and achieve low-overhead data backup, the invention further backs up data-storage transactions with the resources of other servers. When performing a backup operation, a server preferentially selects internal resources, which have lower overhead and shorter failure recovery time, to store the backup; when internal storage resources are insufficient, it selects the resources of other servers.
Server_1 acts as the master server that accepts user transaction requests, and server_2 to server_L are the other servers that can provide backup service. After the master server receives a user request, it can reasonably select the storage location of the data backup according to storage overhead and communication overhead, choosing different server resources for storing the backup.
On the master server, the user can submit a usage application to the cloud servers through the cloud platform communication interface, query the execution state of transactions, and check the integrity of the stored data. The resource monitor tracks changes in cloud server resources, collects the resource information of each server, and provides the scheduler with the current resource state of each server. According to the user's transaction request information and the current resource information of each server, the scheduler dynamically stores the multiple data backups of a user transaction onto the resources of different servers according to a predefined policy, and feeds the scheduling result back to the cloud platform communication interface. When a new user transaction arrives, the scheduler distributes its data backups so as to realize a low-overhead backup service.
After the scheduling policy computed by the master server stores the data backup of a user transaction on cloud platform server_k (k = 1, 2, ..., L), that backup remains on the corresponding cloud platform until the storage time applied for by the user expires; backups stored on other cloud platforms are not written back to the master server's cloud platform even when the master server's resource space becomes idle.
A data backup cannot be further partitioned and stored across different server platforms; that is, one backup can only reside in the hardware resources of a single server at a time. To avoid data redundancy, after server_k receives a data backup storage request from server_1, it needs to store only one version of the data.
Suppose the server currently serves W users U_1, ..., U_W, the users having N_1, ..., N_W transactions respectively, where each subtransaction may begin processing at a different time. The resource request information of each transaction is represented by a four-tuple {t_ij, S_ij, T_ij, D_ij}, where t_ij denotes the j-th subtransaction of the i-th user, with i = 1, 2, ..., W and j = 1, 2, ..., N_i; and S_ij, T_ij and D_ij denote the start time, storage time and storage capacity of transaction t_ij, respectively.
In the present invention, all backups can be stored to the resources of different servers server_1, ..., server_L according to the overhead cost. Assume the data storage bandwidths of the servers are BW_1, ..., BW_L, respectively.
Preferably, the data backup overhead cost of the servers considers two parts, the storage overhead and the communication overhead of the data:
cost = Σ_{i=1..W} Σ_{j=1..N_i} Σ_{k=1..L} (T_ijk × D_ij × O_ijk + D_ij × y_ijk)
where T_ijk denotes the time for which a backup of transaction j of user i is stored in the resources of the k-th server; D_ij denotes the storage volume of the j-th transaction of user i; O_ijk denotes the number of data versions of the j-th transaction of user i stored on the k-th server; and y_ijk is a Boolean variable that equals 1 when server_k holds a backup of transaction j of user i (that is, when O_ijk ≥ 1), and 0 otherwise.
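The overhead formula can be evaluated directly; the nested-list layout of the inputs below is an assumption chosen for illustration:

```python
def backup_cost(T, D, O, y):
    """cost = sum over i, j, k of
         T[i][j][k] * D[i][j] * O[i][j][k]   (storage overhead)
       + D[i][j] * y[i][j][k]                (communication overhead).
    T[i][j][k]: storage time of the backup of transaction (i, j) on server k;
    D[i][j]: storage volume; O[i][j][k]: number of stored versions;
    y[i][j][k]: 1 if server k holds a backup of transaction (i, j), else 0."""
    total = 0.0
    for i in range(len(T)):
        for j in range(len(T[i])):
            for k in range(len(T[i][j])):
                total += (T[i][j][k] * D[i][j] * O[i][j][k]
                          + D[i][j] * y[i][j][k])
    return total
```

A scheduler could compare this cost across candidate placements of a new backup and choose the server that minimizes it.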
In summary, the method proposed by the present invention improves the resource utilization and load balancing of the cloud platform system.
Obviously, those skilled in the art should appreciate that the modules or steps of the present invention described above can be implemented on a general-purpose computing system: they may be concentrated on a single computing system or distributed over a network formed by multiple computing systems, and they may optionally be implemented as program code executable by a computing system, so that they can be stored in a storage system and executed by the computing system. The invention is thus not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are intended only to illustrate or explain the principles of the invention, not to limit it. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention shall fall within its protection scope. Moreover, the appended claims are intended to cover all changes and modifications falling within the scope and boundary of the claims, or equivalents of such scope and boundary.

Claims (3)

1. A data transmission method for a cloud platform, characterized by comprising:
after a master server receives a user request, selecting internal resources for storing data backups, and selecting resources of other servers for storing the backups when internal storage resources are insufficient;
when other servers are selected, determining, by the master server, the storage location of each data backup according to the storage overhead and the communication overhead between servers, and transmitting the user data to the determined storage location to complete the backup.
2. The method according to claim 1, characterized in that, before the master server receives the user request, the method further comprises: through the cloud platform communication interface of the master server, the user submits a usage application to the cloud servers, queries the execution state of transactions, and checks the integrity of the stored data;
the master server comprises a resource monitor that tracks changes in cloud server resources, collects the resource information of each server, and provides the scheduler with the current resource state of each server; according to the user's transaction request information and the current resource information of each server, the scheduler dynamically stores the multiple data backups of a user transaction onto the resources of different servers according to a predefined policy, and feeds the scheduling result back to the cloud platform communication interface; when a new user transaction arrives, the scheduler distributes its data backups according to storage overhead and communication overhead.
3. The method according to claim 2, characterized in that, after the scheduling policy computed by the master server stores the data backup of a user transaction on another cloud server, that backup remains on the corresponding cloud server until the storage time applied for by the user expires, and the backup stored on another cloud server is not written back to the master server's cloud platform even when the master server's resource space becomes idle;
a data backup cannot be further partitioned and stored across different server platforms, and one backup can only reside in the hardware resources of a single server at a time;
the resource request information of each transaction is represented by a four-tuple {t_ij, S_ij, T_ij, D_ij}, where the server currently serves W users U_1, ..., U_W, the users having N_1, ..., N_W transactions respectively; t_ij denotes the j-th subtransaction of the i-th user, with i = 1, 2, ..., W and j = 1, 2, ..., N_i; and S_ij, T_ij and D_ij denote the start time, storage time and storage capacity of transaction t_ij, respectively;
the data backup overhead cost of the servers is calculated as follows:
cost = Σ_{i=1..W} Σ_{j=1..N_i} Σ_{k=1..L} (T_ijk × D_ij × O_ijk + D_ij × y_ijk)
where L is the total number of servers; T_ijk denotes the time for which a backup of transaction j of user i is stored in the resources of the k-th server; D_ij denotes the storage volume of the j-th transaction of user i; O_ijk denotes the number of data versions of the j-th transaction of user i stored on the k-th server; and y_ijk is a Boolean variable equal to 1 when the k-th server stores a backup of transaction j of user i, and 0 otherwise.
CN201510232702.7A 2015-05-08 2015-05-08 Data transmission method for cloud platform Pending CN104796494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510232702.7A CN104796494A (en) 2015-05-08 2015-05-08 Data transmission method for cloud platform

Publications (1)

Publication Number Publication Date
CN104796494A true CN104796494A (en) 2015-07-22

Family

ID=53561002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510232702.7A Pending CN104796494A (en) 2015-05-08 2015-05-08 Data transmission method for cloud platform

Country Status (1)

Country Link
CN (1) CN104796494A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060120411A1 (en) * 2004-12-07 2006-06-08 Sujoy Basu Splitting a workload of a node
CN101631360A (en) * 2009-08-19 2010-01-20 中兴通讯股份有限公司 Method, device and system for realizing load balancing
CN102546782A (en) * 2011-12-28 2012-07-04 北京奇虎科技有限公司 Distribution system and data operation method thereof
CN103379156A (en) * 2012-04-24 2013-10-30 深圳市腾讯计算机系统有限公司 Method, system and device achieving storage space dynamic balancing
CN103763378A (en) * 2014-01-24 2014-04-30 中国联合网络通信集团有限公司 Task processing method and system and nodes based on distributive type calculation system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Wei: "Research on Metadata Management for Mass Storage Systems", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
Zhu Yingying et al.: "Dynamic Task Scheduling Algorithm for Massive Multimedia Data in Cloud Systems", Journal of Chinese Computer Systems *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018059238A1 (en) * 2016-09-30 2018-04-05 杭州海康威视数字技术股份有限公司 Cloud storage based data processing method and system
CN107888636A (en) * 2016-09-30 2018-04-06 杭州海康威视数字技术股份有限公司 Data processing method and system based on cloud storage
CN107888636B (en) * 2016-09-30 2020-01-17 杭州海康威视数字技术股份有限公司 Data processing method and system based on cloud storage
US11314539B2 (en) 2016-09-30 2022-04-26 Hangzhou Hikvision Digital Technology Co., Ltd. Cloud storage based data processing method and system
WO2019144846A1 (en) * 2018-01-23 2019-08-01 杭州海康威视系统技术有限公司 Storage system, and method and apparatus for allocating storage resources
US11403009B2 (en) 2018-01-23 2022-08-02 Hangzhou Hikivision System Technology Co., Ltd. Storage system, and method and apparatus for allocating storage resources
CN110602156A (en) * 2019-03-11 2019-12-20 平安科技(深圳)有限公司 Load balancing scheduling method and device

Similar Documents

Publication Publication Date Title
CN104794239A (en) Cloud platform data processing method
CN107038069B (en) Dynamic label matching DLMS scheduling method under Hadoop platform
CN112162865B (en) Scheduling method and device of server and server
US8812639B2 (en) Job managing device, job managing method and job managing program
Sprunt et al. Aperiodic task scheduling for hard-real-time systems
Stankovic et al. Evaluation of a flexible task scheduling algorithm for distributed hard real-time systems
CN111381950B (en) Multi-copy-based task scheduling method and system for edge computing environment
CN114138486B (en) Method, system and medium for arranging containerized micro-services for cloud edge heterogeneous environment
Koole et al. Resource allocation in grid computing
Saraswat et al. Task mapping and bandwidth reservation for mixed hard/soft fault-tolerant embedded systems
CN112416585B (en) Deep learning-oriented GPU resource management and intelligent scheduling method
CN104580306A (en) Multi-terminal backup service system and task scheduling method thereof
CN110308984B (en) Cross-cluster computing system for processing geographically distributed data
Diaz et al. Pessimism in the stochastic analysis of real-time systems: Concept and applications
CN108829512A (en) A kind of cloud central hardware accelerates distribution method, system and the cloud center of calculating power
US6993764B2 (en) Buffered coscheduling for parallel programming and enhanced fault tolerance
CN103294548A (en) Distributed file system based IO (input output) request dispatching method and system
Hu et al. Distributed computer system resources control mechanism based on network-centric approach
CN105373426A (en) Method for memory ware real-time job scheduling of car networking based on Hadoop
CN106201701A (en) A kind of workflow schedule algorithm of band task duplication
CN106201681B (en) Method for scheduling task based on pre-release the Resources list under Hadoop platform
CN104796494A (en) Data transmission method for cloud platform
Hu et al. Job scheduling without prior information in big data processing systems
Benoit et al. Max-stretch minimization on an edge-cloud platform
CN104796493A (en) Information processing method based on cloud computing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150722
