CN104794239A - Cloud platform data processing method - Google Patents


Info

Publication number
CN104794239A
CN104794239A (application CN201510232594.3A)
Authority
CN
China
Prior art keywords
transaction
data
client
node
subtransaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510232594.3A
Other languages
Chinese (zh)
Inventor
高爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Boyuan Technology Co Ltd
Original Assignee
Chengdu Boyuan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Boyuan Technology Co Ltd filed Critical Chengdu Boyuan Technology Co Ltd
Priority to CN201510232594.3A priority Critical patent/CN104794239A/en
Publication of CN104794239A publication Critical patent/CN104794239A/en
Pending legal-status Critical Current

Links

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a cloud platform data processing method. The method comprises the following steps: video data files in a cloud storage system are stored as partitioned, multi-version blocks and planned for parallel transaction processing; the data nodes of the cloud platform are clustered using the mapping between data blocks and versions as the feature; and unexecuted transactions are dynamically scheduled according to feedback from executed transactions. The method improves the resource utilization and load balancing of the cloud platform system.

Description

A cloud platform data processing method
Technical field
The present invention relates to cloud computing, and in particular to a cloud platform data processing method.
Background technology
Cloud storage offers high reliability, high scalability, and low cost. Each storage machine is an independent storage node; nodes can join and leave without affecting normal system operation, and large data volumes can be handled, which is an advantage when processing massive video data. However, because video data is split into many blocks, and each block and its versions are stored on different nodes of the cloud platform, transactions must be scheduled optimally across a large number of nodes. Current scheduling and resource-allocation mechanisms in cloud platforms are static, local ones: when allocating resources and scheduling, they consider only the current resource state of the nearest nodes, and do not optimize resource allocation from a global, system-wide perspective.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes a cloud platform data processing method, comprising:
Video data files in the cloud storage system are stored as partitioned, multi-version blocks and planned for parallel transaction processing; the data nodes of the cloud platform are clustered using the mapping between data blocks and versions as the feature; and unexecuted transactions are dynamically scheduled according to feedback from executed transactions.
Preferably, storing the video data files of the cloud storage system as partitioned multi-version blocks further comprises:
User requests of the video application are submitted to the client of the cloud storage system as task description files. When a transaction is submitted, the client divides it into multiple subtransactions according to the data-partitioning information so that they execute in parallel; each subtransaction is associated with one data block, so that different subtransactions can be dispatched to different nodes for execution. The partitioning information is described by a tree Tr, Tr = (C, R), where C is a set of data elements, each element representing a data block, and R is a set of binary relations, each element indicating whether the contents of two data blocks are identical. An element c_i of C is described by a two-tuple c_i = <block_i, a_i>, where block_i is the number of data block i and a_i is the number of the node holding data block i. An element r_{i,j} of R indicates that data block j is a backup version of data block i.
Preferably, each data node has a scheduling process. The scheduling processes of the different nodes communicate with the client's scheduling process according to the current state of the virtual machines on their data node, exchanging transaction-scheduling and control information. After the client's scheduling process divides a transaction into subtransactions according to the queried video-data partitioning tree Tr, the data nodes are clustered according to the binary relation set R, with the clustering rule:
For any node a_i with a_i ∈ D_m: if a_j ∈ D_m and a_i ≠ a_j, then r_{i,j} ∈ R,
where D_m is the cluster set formed by all data nodes storing the same data block m; the client's scheduling process selects the optimal node within D_m to execute the task.
When a data node's scheduling process receives a transaction, the transaction is either inserted into the node's waiting queue according to its priority or preempts lower-priority transactions and executes immediately. After the transaction finishes, the data node's scheduling process notifies the client's scheduling process so that it can schedule the subsequent transactions.
Compared with the prior art, the present invention has the following advantage:
The proposed method improves the resource utilization and load balancing of the cloud platform system.
Brief description of the drawings
Fig. 1 is a flow chart of the cloud platform data processing method according to an embodiment of the present invention.
Detailed description of the embodiments
A detailed description of one or more embodiments of the invention is provided below, together with the accompanying drawings that illustrate its principles. The invention is described in conjunction with such embodiments, but is not limited to any embodiment; its scope is defined only by the claims, and it covers many alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention. These details are provided for exemplary purposes, and the invention may also be practiced according to the claims without some or all of them.
Under a cloud computing environment, the present invention models the transactions of video data processing together with transaction scheduling and resource allocation, and proposes a video-oriented transaction scheduling method under this model. Based on the partitioned multi-version layout of video data files in the cloud storage system, transactions are planned for parallel execution; the data nodes of the cloud platform are clustered using the mapping between data blocks and versions as the feature; and, based on feedback from completed transactions, unexecuted transactions are scheduled dynamically, thereby improving resource utilization and load balancing. Fig. 1 shows the cloud platform data processing flow according to an embodiment of the present invention.
User requests of the video application are submitted to the client of the cloud storage system as task description files. When a transaction is submitted, the client divides it into multiple subtransactions according to the data-partitioning information so that they execute in parallel; each subtransaction is associated with one data block. Different subtransactions of the same transaction can thus be dispatched to different nodes, avoiding the bottlenecks caused by resource contention between subtransactions and better fitting the high resource consumption of video processing. The partitioning information is described by a tree Tr:
Tr = (C, R), where C is a set of data elements, each element representing a data block, and R is a set of binary relations, each element indicating whether the contents of two data blocks are identical. An element c_i of C is described by a two-tuple c_i = <block_i, a_i>,
where block_i is the number of data block i and a_i is the number of the node holding data block i. An element r_{i,j} of R indicates that data block j is a copy of data block i. Transactions are submitted to the client of the cloud storage system at arbitrary times; the transaction flow submitted to the client can be described as J = {J_0, J_1, J_2, ..., J_i, ..., J_{n-1}, J_n},
where each transaction j_i is described by a six-tuple j_i = <transid_i, decid_i, fileid_i, level_i, rcv_t_i, end_t_i>, in which transid is the transaction number, decid the task-description-file number, fileid the number of the video data file the transaction processes, level the transaction's priority, rcv_t its submission time, and end_t its deadline. The submitted transaction j_i is divided by the client into multiple subtransactions. The subtransactions after division are described by a directed graph GR:
GR=<V,E>
Each element of the vertex set V represents a subtransaction. A subtransaction v_i is described by a nine-tuple v_i = <taskid_i, transid_i, type_i, wl_i, cpu_i, mem_i, disk_i, band_i, block_i>,
where taskid is the subtransaction number, transid the number of the transaction it belongs to, type the subtransaction type, wl its workload (in millions of instructions), and cpu, mem, disk, band the amounts of abstract virtual-machine resources (CPU, memory, disk, and bandwidth) it occupies during execution. Although CPU resources can be time-shared among different subtransactions in the virtual-machine abstraction, a considerable portion of video operations (for example, encoding and decoding) are compute-intensive and consume substantial CPU; running several compute-intensive transactions on one CPU core sharply reduces execution speed, so the number of compute-intensive transactions per processor must be limited. CPU resources are therefore also abstracted into units, one processor being abstracted as 4 CPU units. block is the number of the data block the subtransaction processes. Each element of the edge set E represents a dependency between two subtransactions: e_{i,j} ∈ E means subtransaction v_i is a predecessor of subtransaction v_j, and v_j is a successor of v_i.
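The nine-tuple subtransaction descriptor and the directed graph GR described above can be sketched as a small Python structure. This is an illustrative sketch only; the class and field names (and the `ready` selection helper, which mirrors step 3 of the method) are assumptions, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SubTransaction:
    # the nine-tuple v_i = <taskid, transid, type, wl, cpu, mem, disk, band, block>
    taskid: int
    transid: int
    type: str        # subtransaction type, e.g. "decode" (illustrative)
    wl: float        # workload, millions of instructions
    cpu: int         # abstract CPU units (one processor = 4 units)
    mem: int
    disk: int
    band: int
    block: int       # number of the data block this subtransaction processes

@dataclass
class TransactionGraph:
    # GR = <V, E>: vertices are subtransactions, edges are precedence constraints
    vertices: dict = field(default_factory=dict)  # taskid -> SubTransaction
    edges: set = field(default_factory=set)       # (i, j): v_i precedes v_j

    def ready(self, done):
        """Taskids of subtransactions with no predecessor, or whose
        predecessors have all completed (the selection rule of step 3)."""
        return [t for t in self.vertices if t not in done
                and all(p in done for (p, q) in self.edges if q == t)]
```

A scheduler would repeatedly call `ready` with the set of completed taskids to obtain the next dispatchable subtransactions.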
Given the characteristics of video data processing, in the transaction scheduling and resource allocation model proposed by the present invention each data node has a scheduling process. These scheduling processes know the current state of the virtual machines on their data node, and the scheduling processes of different nodes can communicate with the client's scheduling process to exchange transaction-scheduling and control information. After the client's scheduling process divides a transaction into subtransactions according to the queried video-data partitioning tree Tr, the data nodes are clustered according to the binary relation set R, with the following rule:
For any a_i ∈ D_m: if a_j ∈ D_m and a_i ≠ a_j, then r_{i,j} ∈ R.
All data nodes storing the same data block m form a cluster set D_m, and a_i is a node within D_m. The client's scheduling process selects the optimal node within D_m to execute the task, from the standpoint of improving resource utilization and load balancing.
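The clustering rule above groups into one set D_m every node that holds a copy of the same content. A minimal sketch, assuming the c_i = <block_i, a_i> tuples and the replica relation R as inputs (the union-find over block ids is an implementation choice, not stated in the patent):

```python
from collections import defaultdict

def cluster_nodes(elements, replicas):
    """Group data nodes into cluster sets D_m: all nodes holding a copy of
    the same content m.

    elements: list of (block_id, node_id) pairs (the c_i = <block_i, a_i> tuples)
    replicas: set of (i, j) pairs meaning block j is a backup of block i (relation R)
    """
    # union-find over block ids, so that replicated blocks share one content id
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in replicas:
        parent[find(i)] = find(j)

    clusters = defaultdict(set)  # content id m -> D_m (set of node ids)
    for block, node in elements:
        clusters[find(block)].add(node)
    return dict(clusters)
```

Each resulting set is a D_m from which the client's scheduling process picks the executing node.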
When a data node's scheduling process receives a transaction, the transaction either is inserted into the node's waiting queue according to its priority, or preempts lower-priority transactions and executes immediately. After the transaction finishes, the data node's scheduling process notifies the client's scheduling process so that it can schedule the subsequent transactions.
In the cloud storage system, the data blocks of video files are distributed over a huge number of nodes, so each node storing video data blocks must effectively relieve the load on the management node. In the transaction scheduling model proposed by the present invention, nodes have the ability to allocate their own resources and schedule transactions, and to collect statistics on their own resources and load state.
In the dynamic resource allocation and transaction scheduling method proposed by the present invention, the scheduling process of client c sends a transaction request to all data nodes (a_1, a_2, a_3) in the cluster set D_m corresponding to data block m. Each node in D_m computes the estimated execution effect of transaction v_i on that node, based on its internal resources, its current load, and dynamic factors extracted from historical run-status information, and feeds the estimate back to the client's scheduling process. The client's scheduling process then selects, according to the feedback, the node that can complete the transaction earliest to execute v_i. This mode, in which the client dominates scheduling while the data nodes dominate resource allocation with dynamic feedback, benefits the load balancing of the whole cloud storage system.
Step 1. The scheduling process of client c queries the management node for the partitioning tree Tr of the video file the transaction needs to process.
Step 2. The scheduling process of client c divides the transaction into multiple subtransactions according to the video file's partitioning information, each subtransaction processing one data block; the result of the division is output as a directed graph GR.
Step 3. Check whether GR contains a subtransaction with no predecessors or whose predecessor subtransactions have all completed. If none exists, wait; otherwise select such a subtransaction from GR, denoted v_i.
Step 4. Using the partitioning tree Tr and the data-block number block_i in the descriptor of subtransaction v_i, client c clusters the nodes and outputs a node set D_blocki, D_blocki = {a_1, a_2, ..., a_j, ..., a_{n-1}}. Client c sends a transaction request, containing the descriptor of subtransaction v_i, to all nodes in D_blocki.
Step 5. A node a_j receiving the transaction request estimates, from the workload wl_i and type of subtransaction v_i and the resource characteristics of a_j, the estimated execution time ETM'_{i,j} of v_i on node a_j. In the present invention, the element ETM_{i,j} of the transaction-time matrix denotes the estimated time of subtransaction v_i on node a_j. When the workload of v_i is very large, the estimate deviates considerably from the actual value; an adjustment factor fm_{k,j} is therefore introduced to calibrate ETM'_{i,j}, yielding the final ETM_{i,j}:
ETM_{i,j} = ETM'_{i,j} × fm_{k,j}
where fm_{k,j} is the adjustment factor of transaction class k on node a_j. The adjustment factor fm_{k,j} reflects the historical characteristics of node a_j when executing class-k transactions; its initial value is 1.
fm'_{k,j} = Tact_{k,j} / ETM_{k,j}
fm_{k,j} = (fm_{k,j} × λ + fm'_{k,j} × (1 − λ)) / 2
fm'_{k,j} is the update sample for fm_{k,j}: each time node a_j finishes a class-k subtransaction, it computes fm'_{k,j}, where Tact_{k,j} is the actual processing time of the subtransaction on node a_j (including the time spent waiting for resources to become ready before it starts executing or after it is preempted). If |fm'_{k,j} − 1| > Δfm, the sample fm'_{k,j} is discarded. Δfm is a predefined threshold and λ a predefined weight; they respectively regulate the update speed and amplitude of fm_{k,j}. If the update amplitude of fm_{k,j} is too small, its value ages easily when the overall load of node a_j changes, hurting the accuracy of the ETM_{i,j} estimate. If the update amplitude is too large, its value is easily disturbed by individual local transactions and becomes unstable, equally hurting the accuracy of the estimate.
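The calibration-factor update above (the sample fm' = Tact/ETM, the Δfm outlier check, and the λ-weighted blend) can be sketched in a few lines. The function and parameter names are illustrative; the division by 2 is kept exactly as the patent's formula states it.

```python
def update_fm(fm, tact, etm, lam=0.5, delta=0.5):
    """One update of the per-(transaction-class, node) factor fm_{k,j}.

    fm   : current factor (initialised to 1)
    tact : actual processing time Tact of the just-finished subtransaction
    etm  : its estimated execution time ETM
    lam  : weight λ balancing history against the fresh sample
    delta: threshold Δfm; outlier samples are discarded
    """
    fm_new = tact / etm                  # fm' = Tact / ETM
    if abs(fm_new - 1.0) > delta:        # |fm' - 1| > Δfm: reject the sample
        return fm
    return (fm * lam + fm_new * (1.0 - lam)) / 2.0
```

A node would call this once per completed class-k subtransaction and use the returned value to calibrate subsequent ETM estimates.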
Step 6. Because resources are preempted by priority in the transaction scheduling model, the present invention also proposes a resource-queue management method to compute the resource ready time of subtransaction v_i on node a_j, denoted ETA_{i,j}. Inputs: the version P of the queue of transactions currently executing on node a_j, the version W of the queue of transactions waiting for resources on a_j, the quantity ar_j of currently idle resources on a_j, and the descriptor of subtransaction v_i. Output: the resource ready time ETA_{i,j} of v_i on a_j.
1. Initialize ETA_{i,j} to the current time.
2. Push all transactions in queue P into queue G in order.
3. Push all transactions in queue W into queue G in order; let the length of G be n.
4. Insert subtransaction v_i into queue G according to its priority; let its position in G be i.
5. Let r_i be the resources subtransaction v_i needs. If ar_j is greater than r_i, go to step 14.
6. Let k be the position of a subtransaction v_k in queue G. For each k = i+1, ..., n: if v_k ∈ P, then ar_j = ar_j + r_k, and v_k is removed from queue G.
7. Push the executing transactions in queue G into queue G_1 in order, and the waiting transactions in queue G into queue G_2 in order.
8. From each transaction's estimated time and elapsed processing time (excluding time spent waiting after preemption), compute the current remaining processing time of each transaction in G_1, and sort G_1 by current remaining processing time, so that transactions with little remaining time can complete as early as possible and release their resources.
9. Let the current head transaction of queue G_1 be vh_1; update ETA_{i,j} and ar_j:
ETA_{i,j} = ETA_{i,j} + (current remaining processing time of vh_1), ar_j = ar_j + rh_1.
10. Pop the head transaction vh_1 from queue G_1.
11. Let the current head transaction of queue G_2 be vh_2; if ar_j is not greater than rh_2, go to step 9.
12. Pop the head transaction vh_2 from queue G_2, push it onto the tail of queue G_1, and update ar_j:
ar_j = ar_j − rh_2.
13. If queue G_2 is not empty, go to step 9.
14. Output ETA_{i,j}; end.
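The queue walk above can be approximated by a short simulation. This is a simplified interpretation, not the patent's exact fourteen steps: running transactions are popped in order of remaining time, their resources are released, and queued-ahead transactions are admitted whenever resources allow, until the new subtransaction's demand can be met. All names are illustrative.

```python
import heapq

def resource_ready_time(now, running, waiting, free, need):
    """Estimate ETA_{i,j}: the time at which node j can free `need` idle
    resource units for a new subtransaction (simplified sketch).

    running: (remaining_time, resources) pairs for currently executing transactions
    waiting: (estimated_time, resources) pairs queued ahead of the new subtransaction
    free   : currently idle resource units ar_j
    need   : resource demand r_i of the new subtransaction
    """
    if free >= need:
        return now                     # step 5: enough idle resources right away
    heap = list(running)               # pop transactions in order of finishing
    heapq.heapify(heap)
    pending = list(waiting)
    eta = now
    while heap and free < need:
        remaining, res = heapq.heappop(heap)   # steps 9-10: a transaction finishes
        eta = now + remaining
        free += res
        # steps 11-13: admit queued-ahead transactions while resources allow
        while pending and free >= pending[0][1]:
            t, r = pending.pop(0)
            free -= r
            heapq.heappush(heap, (remaining + t, r))
    return eta
```

The returned `eta` plays the role of ETA_{i,j} in the finish-time estimate of step 7.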
Step 7. Compute the earliest finish time θ_{i,j} of transaction v_i on node a_j:
θ_{i,j} = ETA_{i,j} + ETM_{i,j}
Both factors, ETM_{i,j} and ETA_{i,j}, affect the accuracy of θ and thus directly affect the optimality of transaction scheduling and resource allocation. The influence of the first factor, ETM_{i,j}, is effectively suppressed by the adjustment factor fm_{k,j} of step 5; for the second factor, ETA_{i,j}, another adjustment factor fd_j must be introduced into the computation of θ.
Let transaction v_x be a historical transaction completed on node a_j. Immediately after v_x finishes, the up-to-date adjustment factor fd_j on node a_j is computed:
fd_j = α × (Tact_start_x − ETA_{x,j}) / ETA_{x,j}
where Tact_start_x is the real resource ready time of transaction v_x (as opposed to the resource ready time ETA_{x,j} computed by the resource-queue management method) and α is a predefined constant. The meaning of fd_j is the amount of delay per unit of waiting time; the parameter α regulates the amplitude with which fd_j dynamically corrects θ. Because of preemption events, the fd_j computed from the execution of high-priority transactions is noticeably smaller. The present invention therefore extends fd_j to two dimensions, fd_{j,l}, where j denotes the node and l the priority; transactions of different priorities use different adjustment factors fd_j to compute their earliest finish times. With fd_j introduced, the final earliest finish time θfd_{i,j} of a transaction is computed as follows.
θfd_{i,j} = (1 + fd_j) × ETA_{i,j} + ETM_{i,j}
Step 8. Node a_j feeds back to client c the earliest finish time θfd_{i,j} of subtransaction v_i. Client c selects from D_blocki = {a_1, a_2, ..., a_j, ..., a_{n-1}} the node a_x to execute subtransaction v_i, where θfd_{i,x} satisfies:
θfd_{i,x} = min(θfd_{i,1}, θfd_{i,2}, ..., θfd_{i,n-1})
After subtransaction v_i is assigned to node a_x by client c, it is inserted into the waiting queue of node a_x according to its priority, or preempts lower-priority transactions and executes immediately.
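The node-selection rule of steps 7-8 reduces to a minimum over the corrected finish times fed back by the candidate nodes. A minimal sketch, with illustrative names:

```python
def pick_node(feedbacks):
    """Select the executing node by the corrected earliest finish time
    θfd_{i,j} = (1 + fd_j) * ETA_{i,j} + ETM_{i,j} (steps 7-8).

    feedbacks: dict mapping node id -> (eta, etm, fd) as reported by that node.
    Returns the node id with the minimum θfd, i.e. the earliest expected finish.
    """
    def thetafd(item):
        _, (eta, etm, fd) = item
        return (1.0 + fd) * eta + etm

    best_node, _ = min(feedbacks.items(), key=thetafd)
    return best_node
```

In the method's flow, the client would call this with the per-node feedback for subtransaction v_i and dispatch v_i to the returned node.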
Step 9. Client c checks whether GR is empty. If so, the scheduling process ends; if not, go to step 3.
To prevent low-priority transactions from waiting ever longer in a node's waiting queue, or being preempted repeatedly in its execution queue, when a large number of high-priority transactions arrive at the node, so that their actual processing times keep growing, each node traverses the transactions in its waiting queue at a predefined frequency and computes their execution efficiency θ:
θ = γ × (Tacte_y / ETM_{y,j})
where Tacte_y is the current actual processing time of transaction v_y (including time spent preempted and in the waiting queue) and γ is a predefined constant. The priority of v_y is raised by θ levels and Tacte_y is reset; then, according to its new priority, v_y is reinserted into the node's waiting queue or preempts lower-priority transactions and executes.
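The anti-starvation sweep above can be sketched as follows. Rounding θ down to whole priority levels and treating larger numbers as higher priority are interpretation choices of this sketch, not statements of the patent; all names are illustrative.

```python
def promote_starved(transactions, gamma=0.5):
    """Periodic anti-starvation sweep: raise each waiting transaction's
    priority by θ = γ * (Tacte / ETM) levels and reset its wait clock.

    transactions: list of dicts with keys 'priority', 'tacte', 'etm'
                  (larger 'priority' = more urgent, an assumption here).
    Mutates the list in place and returns it.
    """
    for t in transactions:
        theta = gamma * (t['tacte'] / t['etm'])  # execution (in)efficiency
        t['priority'] += int(theta)              # promote by theta levels
        t['tacte'] = 0.0                         # reset accumulated time
    return transactions
```

A node would run this at the predefined frequency over its waiting queue, then re-queue or preempt according to the new priorities.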
To satisfy users' transaction requests, reduce failure recovery time, and achieve low-overhead data backup, the present invention further backs up data-storage transactions using the resources of other servers. During a backup operation, a server preferentially selects internal resources with lower overhead and shorter failure recovery time to store the backup; when internal storage resources are insufficient, it selects the resources of other servers to store the backup.
Server_1 serves as the master server accepting user transaction requests; server_2 through server_L are other servers that can provide backup service. After the master server receives a user request, it reasonably selects the storage location of the data backup according to storage overhead and communication overhead, choosing among the resources of the different servers.
On the master server, users submit applications to use the cloud servers through the cloud platform communication interface, query the execution state of transactions, and check the integrity of stored data. A resource monitor tracks changes in cloud server resources, collects the resource information of each server, and supplies the scheduler with each server's current resource state. According to the user's transaction request information and the current resource information of each server, the scheduler dynamically stores the multiple data backups of a user transaction in the resources of different servers according to a predefined policy, and feeds the scheduling result back to the cloud platform communication interface. When a new user transaction arrives, the scheduler distributes its data backups so as to realize a lower-overhead backup service.
After the scheduling policy is computed, once cloud platform server k (k = 1, 2, ..., L) stores a data backup of a user transaction on behalf of the master server, that backup remains on the corresponding cloud platform until the storage time applied for by the user expires; backups stored on other cloud platforms are not written back to the master server's cloud platform even when the master server's resource space becomes idle again.
A data backup cannot itself be further split into blocks across different server platforms; that is, one backup can reside only in the hardware resources of a single server at a time. To avoid data redundancy, when server_k receives a data-backup storage request from server_1, it needs to store only one version of the data.
Suppose the server currently has W users U_1, ..., U_W, each user has multiple transactions N_1, ..., N_W, and the start times of the individual subtransactions may differ. The resource request information of each transaction is represented by a four-tuple {t_ij, S_ij, T_ij, D_ij}, where t_ij denotes the j-th transaction of the i-th user, i = 1, 2, ..., W, j = 1, 2, ..., N_i; and S_ij, T_ij, and D_ij denote respectively the start time, storage time, and storage capacity of transaction t_ij.
In the present invention, all storage backups can be stored in the resources of the different servers server_1, ..., server_L according to their overhead cost. Assume the data-storage bandwidths of the servers are BW_1, ..., BW_L respectively.
Preferably, the data backup cost of the servers mainly considers two parts, the storage overhead and the communication overhead of the data:
cost = Σ_{i=1}^{W} Σ_{j=1}^{N_i} Σ_{k=1}^{L} (T_ijk × D_ij × O_ijk + D_ij × y_ijk)
where T_ijk denotes the time a data backup of transaction j of user i is stored in the resources of the k-th server; D_ij denotes the data volume of the j-th transaction of user i; O_ijk denotes the number of data versions of the j-th transaction of user i stored on the k-th server; and y_ijk is a Boolean variable that equals 1 when server_k holds a backup of transaction j of user i (i.e., when O_ijk is greater than or equal to 1) and 0 otherwise.
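The cost formula above is a triple sum over users, transactions, and servers; it can be evaluated directly. A minimal sketch, with illustrative argument names; the nested-list indexing convention is an assumption:

```python
def backup_cost(T, D, O, Y):
    """Total backup overhead per the patent's cost formula:
    cost = Σ_i Σ_j Σ_k (T[i][j][k] * D[i][j] * O[i][j][k] + D[i][j] * Y[i][j][k]).

    T: storage time of each backup; D: data volume per transaction;
    O: number of stored versions; Y: 1 if server k holds a backup of
    transaction (i, j), else 0. T, O, Y are indexed [user][transaction][server],
    D is indexed [user][transaction].
    """
    total = 0.0
    for i, row in enumerate(D):
        for j, d in enumerate(row):
            for k, y in enumerate(Y[i][j]):
                # storage overhead + communication overhead for this backup
                total += T[i][j][k] * d * O[i][j][k] + d * y
    return total
```

A scheduler could evaluate this for candidate placements and keep the cheapest one.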
In summary, the method proposed by the present invention improves the resource utilization and load balancing of the cloud platform system.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with a general-purpose computing system: they can be concentrated on a single computing system or distributed over a network formed by multiple computing systems; optionally, they can be implemented as program code executable by a computing system, and thus stored in a storage system and executed by a computing system. The present invention is therefore not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are only for exemplary illustration or explanation of its principles and are not to be construed as limiting. Any modification, equivalent replacement, improvement, and the like made without departing from the spirit and scope of the present invention shall be included within its scope of protection. In addition, the appended claims are intended to cover all changes and modifications that fall within the scope and boundary of the claims, or the equivalents of such scope and boundary.

Claims (3)

1. A cloud platform data processing method, characterized by comprising:
storing video data files in the cloud storage system as partitioned, multi-version blocks and planning parallel transaction processing; clustering the data nodes of the cloud platform using the mapping between data blocks and versions as the feature; and dynamically scheduling unexecuted transactions according to feedback from executed transactions.
2. The method according to claim 1, characterized in that storing the video data files of the cloud storage system as partitioned multi-version blocks further comprises:
submitting user requests of the video application to the client of the cloud storage system as task description files; when a transaction is submitted, the client divides it into multiple subtransactions according to the data-partitioning information so that they execute in parallel, each subtransaction being associated with one data block, so that different subtransactions can be dispatched to different nodes for execution; the partitioning information is described by a tree Tr, Tr = (C, R), where C is a set of data elements, each element representing a data block, and R is a set of binary relations, each element indicating whether the contents of two data blocks are identical; an element c_i of C is described by a two-tuple c_i = <block_i, a_i>, where block_i is the number of data block i and a_i is the number of the node holding data block i; and an element r_{i,j} of R indicates that data block j is a backup version of data block i.
3. The method according to claim 2, characterized in that each said data node has a scheduling process; the scheduling processes of the different nodes communicate with the client's scheduling process according to the current state of the virtual machines on their data node, exchanging transaction-scheduling and control information; and after the client's scheduling process divides a transaction into subtransactions according to the queried video-data partitioning tree Tr, the data nodes are clustered according to the binary relation set R, with the clustering rule:
for any node a_i with a_i ∈ D_m: if a_j ∈ D_m and a_i ≠ a_j, then r_{i,j} ∈ R,
where D_m is the cluster set formed by all data nodes storing the same data block m; the client's scheduling process selects the optimal node within D_m to execute the task;
and when a data node's scheduling process receives a transaction, the transaction is either inserted into the node's waiting queue according to its priority or preempts lower-priority transactions and executes immediately; after the transaction finishes, the data node's scheduling process notifies the client's scheduling process so that it can schedule the subsequent transactions.
CN201510232594.3A 2015-05-08 2015-05-08 Cloud platform data processing method Pending CN104794239A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510232594.3A CN104794239A (en) 2015-05-08 2015-05-08 Cloud platform data processing method


Publications (1)

Publication Number Publication Date
CN104794239A true CN104794239A (en) 2015-07-22

Family

ID=53559031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510232594.3A Pending CN104794239A (en) 2015-05-08 2015-05-08 Cloud platform data processing method

Country Status (1)

Country Link
CN (1) CN104794239A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055730A (en) * 2009-11-02 2011-05-11 华为终端有限公司 Cloud processing system, cloud processing method and cloud computing agent device
CN102508902A (en) * 2011-11-08 2012-06-20 西安电子科技大学 Block size variable data blocking method for cloud storage system
CN102546730A (en) * 2010-12-30 2012-07-04 中国移动通信集团公司 Data processing method, device and system
CN102546755A (en) * 2011-12-12 2012-07-04 华中科技大学 Data storage method of cloud storage system
CN103701865A (en) * 2013-12-06 2014-04-02 中国科学院深圳先进技术研究院 Data transmission method and system
US20140164391A1 (en) * 2012-12-12 2014-06-12 Hon Hai Precision Industry Co., Ltd. Data block saving system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhu Yingying et al., "Dynamic task scheduling algorithm for massive multimedia data in cloud systems", Journal of Chinese Computer Systems *
Zhong Ruiming et al., "A cost-aware data reliability guarantee algorithm for cloud providers", Journal of Software *
Xiang Fei et al., "New data disaster recovery strategy based on cloud computing environment", Journal on Communications *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107181776A (en) * 2016-03-10 2017-09-19 华为技术有限公司 A kind of data processing method and relevant device, system
CN107181776B (en) * 2016-03-10 2020-04-28 华为技术有限公司 Data processing method and related equipment and system
US10965554B2 (en) 2016-03-10 2021-03-30 Huawei Technologies Co., Ltd. Data processing method and related device, and system
CN106096832A (en) * 2016-06-10 2016-11-09 中山市科全软件技术有限公司 The cloud data managing method in a kind of unmanned supermarket and system
CN106095582A (en) * 2016-06-17 2016-11-09 四川新环佳科技发展有限公司 The task executing method of cloud platform
CN106095582B (en) * 2016-06-17 2019-04-16 四川新环佳科技发展有限公司 The task executing method of cloud platform
CN106406821A (en) * 2016-08-15 2017-02-15 平安科技(深圳)有限公司 Data processing request sorting method and device
WO2018032737A1 (en) * 2016-08-15 2018-02-22 平安科技(深圳)有限公司 Data processing request sorting method and device, terminal, and storage medium
CN106230982A (en) * 2016-09-08 2016-12-14 哈尔滨工程大学 A kind of dynamic self-adapting secure cloud storage method considering node reliability
CN106230982B (en) * 2016-09-08 2019-07-16 哈尔滨工程大学 A kind of dynamic self-adapting secure cloud storage method considering node reliability
CN106445687A (en) * 2016-09-27 2017-02-22 金蝶软件(中国)有限公司 Large transaction execution method and system
US11907766B2 (en) 2020-11-04 2024-02-20 International Business Machines Corporation Shared enterprise cloud

Similar Documents

Publication Publication Date Title
CN104794239A (en) Cloud platform data processing method
CN107038069B (en) Dynamic label matching DLMS scheduling method under Hadoop platform
CN114138486B (en) Method, system and medium for arranging containerized micro-services for cloud edge heterogeneous environment
EP3770774B1 (en) Control method for household appliance, and household appliance
Sprunt et al. Aperiodic task scheduling for hard-real-time systems
US8812639B2 (en) Job managing device, job managing method and job managing program
Koole et al. Resource allocation in grid computing
CN111381950A (en) Task scheduling method and system based on multiple copies for edge computing environment
CN112416585B (en) Deep learning-oriented GPU resource management and intelligent scheduling method
US20120259983A1 (en) Distributed processing management server, distributed system, distributed processing management program and distributed processing management method
Saraswat et al. Task mapping and bandwidth reservation for mixed hard/soft fault-tolerant embedded systems
CN110308984B (en) Cross-cluster computing system for processing geographically distributed data
CN105471985A (en) Load balance method, cloud platform computing method and cloud platform
Lai et al. Sol: Fast distributed computation over slow networks
CN103218233A (en) Data allocation strategy in hadoop heterogeneous cluster
CN106201701A (en) A kind of workflow schedule algorithm of band task duplication
CN103294548A (en) Distributed file system based IO (input output) request dispatching method and system
Hu et al. Distributed computer system resources control mechanism based on network-centric approach
CN105373426A (en) Method for memory ware real-time job scheduling of car networking based on Hadoop
CN104796494A (en) Data transmission method for cloud platform
US20210390405A1 (en) Microservice-based training systems in heterogeneous graphic processor unit (gpu) cluster and operating method thereof
CN108833294B (en) Low-bandwidth-overhead flow scheduling method for data center wide area network
Benoit et al. Max-stretch minimization on an edge-cloud platform
CN103617083A (en) Storage scheduling method and system, job scheduling method and system and management node
CN116302574B (en) Concurrent processing method based on MapReduce

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150722