CN110046040A - Distributed task scheduling processing method and system and storage medium - Google Patents
Distributed task scheduling processing method and system and storage medium
- Publication number
- CN110046040A (application CN201910280104.5A)
- Authority
- CN
- China
- Prior art keywords
- node
- resource
- data
- data storage
- storage cell
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 23
- 238000013500 data storage Methods 0.000 claims abstract description 138
- 210000000352 storage cell Anatomy 0.000 claims abstract description 98
- 238000000034 method Methods 0.000 claims abstract description 35
- 238000012545 processing Methods 0.000 claims abstract description 28
- 230000004044 response Effects 0.000 claims abstract description 5
- 238000013519 translation Methods 0.000 claims description 6
- 238000004891 communication Methods 0.000 claims 1
- 230000008569 process Effects 0.000 abstract description 26
- 238000005516 engineering process Methods 0.000 description 6
- 230000000116 mitigating effect Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mobile Radio Communication Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Embodiments of the present invention relate to the technical field of cloud services, and disclose a distributed task processing method and system and a storage medium. The method comprises: a first node receives data of resources, obtains the data storage unit corresponding to each resource according to the information of the resource, and stores the data of the resource in the corresponding data storage unit; a third node, in response to a request from a second node, returns to the second node the number of second nodes participating in processing the data of the resources; each second node determines its corresponding target data storage units according to the number of participating second nodes and the number of data storage units in the first node, and obtains the data of the corresponding resources from the target data storage units for processing. The present invention can reduce the consumption caused by competition for processing tasks while guaranteeing that all tasks are processed in time.
Description
Technical field
The present invention relates to the technical field of cloud services, and in particular to a distributed task processing method and system and a storage medium.
Background technique
With the development of science and technology, cloud services have become more and more widely used. In a cloud service system, certain nodes monitor data such as the utilization rates of resources (for example, the CPU and memory of the virtual machines of the cloud service) so that users can decide whether capacity expansion is needed. At the same time, other processing must also be performed on the resource data, such as computing the average CPU utilization rate, in order to better maintain the cloud service. In such processing, for example, one data item related to a resource is received per minute, and the newly received data must be processed within the unit of time. The data of the same resource are related to one another, while the data of different resources are unrelated; therefore the data of the same resource must not be processed by multiple nodes simultaneously, so as to avoid errors. Since the resources of a cloud service system keep increasing, the volume of data to be processed also grows continuously, so a distributed system is used to process the resource data. A distributed system is easy to extend and can flexibly increase the processing capability of the system.
In existing distributed systems, resource data processing tasks are distributed as follows. New data received by a first node are put into a set (for example, a data set in Redis). A node that processes data (hereinafter, a second node) traverses this set to obtain all the data to be processed, derives all the resources to be processed from the correspondence between data and resources, and sorts the resources; at the same time it obtains all the second nodes and sorts them as well. Then, according to this second node's position in the order and the number of tasks it should process, it intercepts the corresponding number of resources, in order, from all the resources as this second node's tasks. The number of resources processed by each second node is, for example, the number of processes multiplied by the number of tasks per process (a constant). Because receiving data and processing data proceed concurrently, the second nodes do not obtain their tasks at the same moment; the task resource lists seen by different second nodes may therefore be inconsistent, and since each second node only intercepts the tasks belonging to itself, some tasks may fall between the intercepted ranges and never be allocated. Such a task goes unprocessed within that minute, and the data of the same resource may remain unprocessed for a long time. For example, suppose the first node has received 50 new data items and second node A learns that there are currently two second nodes (A and B) jointly processing them; A then intercepts the new data of 25 resources for processing. By the time second node B starts processing, the first node has received 10 more data items, so B computes that each node should handle 30 new data items and intercepts the 31st to 60th; the 25th to 30th items may then never be processed. The service must be restarted, so that the node order, the moments at which each second node obtains the task list, and the tasks each second node handles all change, before the unprocessed tasks are picked up. Moreover, in the above method the second nodes may compete with one another during task distribution, causing additional time loss, and second nodes late in the order may end up with no tasks to process, which harms processing efficiency.
Summary of the invention
Embodiments of the present invention address one of the problems of the prior art by providing a distributed task processing method and system and a storage medium, which can reduce the consumption caused by competition for processing tasks while guaranteeing that all tasks are processed in time.
In order to solve the above technical problem, embodiments of the present invention provide a distributed task processing method applied to a distributed system. The distributed system includes a first node, a third node, and several second nodes, each second node being communicatively connected with the first node and the third node. The method includes:
the first node receives data of resources, obtains the data storage unit corresponding to each resource according to the information of the resource, and stores the data of the resource in the corresponding data storage unit, wherein there are a plurality of resources and a plurality of data storage units;
in response to a request from a second node, the third node returns to the second node the number of second nodes participating in processing the data of the resources;
the second node determines its corresponding target data storage units according to the number of second nodes participating in processing the data of the resources and the number of data storage units in the first node, and obtains the data of the corresponding resources from the target data storage units for processing.
Embodiments of the present invention further provide a distributed task processing system, comprising a first node, a third node, and several second nodes, each second node being communicatively connected with the first node and the third node;
the first node is configured to receive data of resources, obtain the data storage unit corresponding to each resource according to the information of the resource, and store the data of the resource in the corresponding data storage unit, wherein there are a plurality of resources and a plurality of data storage units;
the third node is configured to respond to a request from a second node by returning to the second node the number of second nodes participating in processing the data of the resources;
the second node is configured to determine its corresponding target data storage units according to the number of second nodes participating in processing the data of the resources and the number of data storage units in the first node, and to obtain the data of the corresponding resources from the target data storage units for processing.
Embodiments of the present invention further provide a storage medium for storing a computer-readable program, the computer-readable program being used to cause a computer to execute the distributed task processing method described above.
Compared with the prior art, in embodiments of the present invention the first node obtains the data storage unit corresponding to a resource based on the received information of the resource and stores the resource's data in that unit; a second node obtains the number of data storage units from the first node and the number of second nodes participating in processing from the third node, and each second node determines its own target data storage units from these two numbers and processes the data of the corresponding resources obtained from them. Because the data of different resources are stored in different data storage units, a second node does not need to traverse all the data to determine the correspondence between resources and data, which reduces the consumption caused by competition for processing tasks; and because each second node determines the target data storage units it should process from the number of data storage units, all resources are guaranteed to be processed in time.
In one embodiment, obtaining the data storage unit corresponding to the resource according to the information of the resource specifically includes: performing a hash transformation on the information of the resource and then taking the result modulo the number of data storage units to obtain the corresponding data storage unit.
In one embodiment, the second node determining its corresponding target data storage units according to the number of second nodes participating in processing the data of the resources and the number of data storage units in the first node specifically includes: obtaining the corresponding target data storage units by evenly allocating all the data storage units among the second nodes participating in processing the data of the resources.
In one embodiment, obtaining the corresponding target data storage units by evenly allocating all the data storage units among the participating second nodes specifically includes: grouping all the data storage units according to the number of second nodes participating in processing the data of the resources, and, according to a preset order of those second nodes, picking out from each group the corresponding data storage unit as a target data storage unit of each second node.
In one embodiment, processing the data of the corresponding resources obtained from the target data storage units specifically includes computing any one or a combination of the following: an average CPU utilization rate, an average memory utilization rate, and a number of URL requests.
In one embodiment, the method further includes: the second node deletes the processed data after obtaining the data of the corresponding resources from the target data storage units and processing them.
In one embodiment, the first node and the third node are the same server.
Brief description of the drawings
Fig. 1 is a structural block diagram of a distributed task processing system according to the present invention;
Fig. 2 is a flowchart of the distributed task processing method according to the first embodiment of the present invention;
Fig. 3 is a flowchart of the distributed task processing method according to the second embodiment of the present invention.
Specific embodiments
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are explained in detail below with reference to the accompanying drawings. However, those skilled in the art will understand that many technical details are set forth in the embodiments in order to help the reader better understand the present invention; the claimed technical solutions of the present invention can nevertheless be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a distributed task processing method applied to the distributed system shown in Fig. 1. The system includes a first node 101, a third node 103, and several second nodes 102, each second node 102 being communicatively connected with the first node 101 and the third node 103. Each second node 102 periodically reports its own relevant information to the third node 103, such as whether it is available, so that the third node 103 can count the number of second nodes able to participate in processing the resource data. The embodiment is not limited to this; for example, the first node 101 and the third node 103 may be the same server, and in the present embodiment the first node 101 may simultaneously also serve as a second node 102. The distributed task processing method of the present embodiment includes: the first node receives data of resources, obtains the data storage unit corresponding to each resource according to the information of the resource, and stores the data of the resource in the corresponding data storage unit, wherein there are a plurality of resources and a plurality of data storage units; the third node, in response to a request from a second node, returns to the second node the number of second nodes participating in processing the resource data; the second node determines its corresponding target data storage units according to that number and the number of data storage units in the first node, and obtains the data of the corresponding resources from the target data storage units for processing. Compared with the prior art, because the data of different resources are stored in different data storage units, a second node does not need to traverse all the data to determine the correspondence between resources and data, which reduces the consumption caused by competition for processing tasks; and because each second node determines the target data storage units it should process from the number of data storage units, all resources are guaranteed to be processed in time.
The implementation details of the distributed task processing method of the present embodiment are described below. The following content is provided only to facilitate understanding of the implementation details and is not necessary for implementing the present solution.
Referring to Fig. 2, the distributed task processing method of the present embodiment includes steps 201 to 203.
Step 201: the first node receives data of resources, obtains the data storage unit corresponding to each resource according to the information of the resource, and stores the data of the resource in the corresponding data storage unit.
There are a plurality of resources and a plurality of data storage units. A resource is, for example, the CPU or memory of a virtual machine in the cloud service system, and the data of the resource are, for example, the CPU utilization rate per minute or the memory utilization rate per minute. The resource may also be a URL, a page, and so on, with the data of the resource being the URL access rate, the number of page requests, and so on; the present embodiment does not specifically limit the resources and their data. The data of the resources may be obtained by monitoring nodes in the cloud service system and reported to the first node. The distributed system in the present embodiment may be a part of the cloud service system.
In the present embodiment, a data storage unit is, for example, a data set of a database such as Redis; the present embodiment does not specifically limit the data storage units. A fixed number of data storage units may be created in the first node, for example 1024 data storage units numbered 1 to 1024. The present embodiment does not specifically limit the number of data storage units, which may also be increased or decreased according to the scale of the resources; the way of increasing the number of data storage units is described below.
In step 201, a hash transformation may be performed on the information of a resource and the result then taken modulo the number of data storage units to obtain the corresponding data storage unit, so that the information of the resource is mapped to the data storage unit corresponding to that resource.
The calculation process of this mapping is illustrated below. The resource is, for example, the CPU of cloud server A, and the information of the resource may be a UUID, which can be represented by 32 hexadecimal digits, for example 0986e0ac-09e8-4f2d-a3c2-66a2473ead40. The UUID is converted by a hash transformation into a 128-bit integer (for example 12366578741376259739645328858664875L), which is then taken modulo the total number of data storage units. If the modulus is 1024, then 12366578741376259739645328858664875L % 1024 yields 320; since the data storage units are numbered from 1, adding 1 to the result of the modulo operation gives the number of the data storage unit corresponding to the resource, namely 321.
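As a concrete illustration, the hash-and-modulo mapping above can be sketched as follows. The patent does not name a specific hash function; MD5 is used here only because it yields a 128-bit integer like the one in the example, and the function name `unit_number` is hypothetical.

```python
import hashlib

NUM_UNITS = 1024  # total number of data storage units created on the first node


def unit_number(resource_info: str, num_units: int = NUM_UNITS) -> int:
    """Map a resource's information (e.g. its UUID) to a unit number in 1..num_units."""
    # Hash transformation: turn the resource information string into a 128-bit integer.
    # MD5 is an assumption; the patent only requires some hash yielding an integer.
    digest = hashlib.md5(resource_info.encode("utf-8")).digest()
    value = int.from_bytes(digest, "big")
    # Take the integer modulo the unit count, then add 1 because units are numbered from 1.
    return value % num_units + 1
```

With a constant modulus the mapping is deterministic, so the same resource always lands in the same unit.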
Because step 201 uses a modulo operation, the data storage unit corresponding to each resource does not change as long as the modulus is constant, so there is a fixed correspondence between resources and data storage units. In some examples, as the scale of the resources keeps growing, the number of data storage units may also be increased, for example doubled, in such a way that the correspondence between most resources and data storage units remains unchanged.
Storing the data of a resource in the corresponding data storage unit in step 201 may specifically be storing the data of the resource whose UUID is 0986e0ac-09e8-4f2d-a3c2-66a2473ead40 in the data storage unit numbered 321.
Step 202: the third node, in response to a request from a second node, returns to the second node the number of second nodes participating in processing the resource data.
The third node can determine, from the information periodically reported by each second node, the number of second nodes available to process the resource data. On receiving a request from a second node, it can return to that node the number of second nodes participating in processing the resource data.
Step 203: the second node determines its corresponding target data storage units according to the number of second nodes participating in processing the resource data and the number of data storage units in the first node, and obtains the data of the corresponding resources from the target data storage units for processing.
Specifically, each second node may evenly divide the data storage units in the first node according to the number of participating second nodes, thereby obtaining its corresponding target data storage units. The embodiment is not limited to this, as long as each data storage unit in the first node is made to correspond to one processing second node. In practical applications, the second node may also delete the processed data after obtaining the data of the corresponding resources from the target data storage units and processing them, so as to save the storage space of the first node.
Processing the data of the corresponding resources obtained from the target data storage units may be computing any one or a combination of the following: the average CPU utilization rate, the average memory utilization rate, and the number of URL requests, so that the service side can better maintain the service performance of the cloud service. The present embodiment does not specifically limit the processing objects and processing results.
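A sketch of the second node's processing step, under the assumption that each stored data item is a numeric utilization sample; the function names are hypothetical.

```python
def average_utilization(samples):
    """Average of the utilization samples drained from one storage unit."""
    return sum(samples) / len(samples) if samples else None


def drain_and_process(units, target_unit_numbers):
    """Pop each target unit's data (deleting the processed data, as the embodiment
    suggests, to save the first node's storage) and compute per-unit averages."""
    results = {}
    for n in target_unit_numbers:
        samples = units.pop(n, [])  # removes the unit's data after reading it
        if samples:
            results[n] = average_utilization(samples)
    return results
```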
Compared with the prior art, the present embodiment creates multiple data storage units in advance and, according to the received information of a resource, quickly maps the resource to its corresponding data storage unit by a modulo operation or the like and stores the resource's data in that unit, so that the data of different resources are stored separately. This greatly reduces the consumption the second nodes would otherwise incur in competing to avoid simultaneous processing of the same resource's data. Moreover, because the number of data storage units in the first node is relatively fixed and each second node obtains the data of the resources to be processed in units of data storage units, no data can be missed during processing.
The second embodiment of the present invention relates to a distributed task processing method. On the basis of the first embodiment, the second embodiment further defines how the data storage units are evenly allocated among the second nodes, which ensures that the data processing tasks are evenly distributed across the second nodes and guarantees processing efficiency.
Referring to Fig. 3, the distributed task processing method of the present embodiment includes steps 301 to 304.
Steps 301 and 302 are respectively identical to steps 201 and 202 of the first embodiment and are not repeated here.
Step 303: obtain the corresponding target data storage units by evenly allocating all the data storage units among the second nodes participating in processing the resource data.
The number of data storage units is the total number of data storage units created in advance in the first node and may be fixed, for example 1024. The second nodes are the nodes configured in the distributed system to process the resource data; their number may be constant or may be scaled horizontally according to the scale of the resources, for example by adding second nodes. The present embodiment does not specifically limit this.
Obtaining the target data storage units in step 303 by evenly allocating all the data storage units among the participating second nodes may include: grouping all the data storage units according to the number of second nodes participating in processing the resource data, and, according to a preset order of those second nodes, picking out from each group the corresponding data storage unit as a target data storage unit of each second node. The participating second nodes need to be sorted after they are obtained. If the number of data storage units is an integer multiple of the number of second nodes with no remainder, that multiple is the number of groups; if there is a remainder, the number of groups is that multiple plus 1. For example, if there are 100 second nodes, numbered 1 to 100, and 1024 data storage units, numbered 1 to 1024, the data storage units are divided into 11 groups by number. In each of the first 10 groups, one data storage unit is allocated to each second node in node order, and the 24 data storage units of the 11th group are likewise allocated one each to the second nodes in order. The embodiment is not limited to this, as long as the data storage units are allocated to the second nodes as evenly as possible.
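The grouping scheme above can be sketched as follows; the function name is hypothetical, and node and unit numbers are 1-based as in the example.

```python
def target_units(num_units: int, node_index: int, num_nodes: int) -> list:
    """Units assigned to the node at position node_index (1-based) in the preset
    order of num_nodes participating second nodes.

    Units 1..num_units are split into consecutive groups of num_nodes; within
    each group the k-th node in the order takes the k-th unit, so the last
    (partial) group covers any remainder."""
    return [u for u in range(1, num_units + 1)
            if (u - 1) % num_nodes == node_index - 1]
```

With 1024 units and 100 nodes this yields 11 groups: nodes 1 to 24 receive 11 units each and nodes 25 to 100 receive 10 each, matching the example.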
Step 304: obtain the data of the corresponding resources from the target data storage units and process them.
Compared with the prior art, the present embodiment creates multiple data storage units in advance, quickly maps each resource to its corresponding data storage unit by a modulo operation or the like according to the received information of the resource, and stores the resource's data in that unit, so that the data of different resources are stored separately. This greatly reduces the consumption the second nodes would otherwise incur in competing to avoid simultaneous processing of the same resource's data; because the number of data storage units in the first node is relatively fixed and each second node obtains the data to be processed in units of data storage units, no data can be missed during processing. Moreover, since the data storage units are evenly allocated among the second nodes, the data can be processed more efficiently.
The third embodiment of the present invention relates to a distributed task processing system; please continue to refer to Fig. 1. The system includes a first node 101, a third node 103, and several second nodes 102, each second node 102 being communicatively connected with the first node 101 and the third node 103. Each second node 102 periodically reports its own relevant information to the third node 103, such as whether it is available, so that the third node 103 can count the number of second nodes able to participate in processing the resource data. The embodiment is not limited to this; for example, the first node 101 and the third node 103 may be the same server.
The first node 101 is configured to receive data of resources, obtain the data storage unit corresponding to each resource according to the information of the resource, and store the data of the resource in the corresponding data storage unit, there being a plurality of resources and a plurality of data storage units;
the third node 103 is configured to respond to a request from a second node by returning to it the number of second nodes participating in processing the resource data;
the second node 102 is configured to determine its corresponding target data storage units according to the number of second nodes 102 participating in processing the resource data and the number of data storage units in the first node 101, and to obtain the data of the corresponding resources from the target data storage units for processing.
In one example, first node 101 is specifically configured to obtain the data storage unit corresponding to a resource by applying a hash transformation to the resource's information and then taking the modulus, where the modulus is the number of data storage units. Second node 102 is configured to obtain its corresponding target data storage units by evenly distributing all data storage units among the second nodes participating in processing the resource's data. Specifically, second node 102 is configured to group all data storage units according to the number of second nodes 102 participating in processing the resource's data, and, following a preset order of those second nodes, to pick the corresponding data storage unit from each group as a target data storage unit of each second node 102. Second node 102 is specifically configured to compute any one or a combination of the following: average CPU utilization, average memory utilization, and URL request count. Second node 102 is further configured to delete the processed data after retrieving the data of the corresponding resource from the target data storage units and processing it.
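A minimal sketch of the hash-then-modulo mapping described above. The choice of MD5 and a string resource key are assumptions for illustration; the patent specifies only a hash transformation followed by a modulo by the number of data storage units.

```python
import hashlib

def storage_unit_for(resource_key: str, num_units: int) -> int:
    """Map a resource to a data storage unit: hash the resource's
    information, then take the hash value modulo the number of units.
    MD5 and the string key format are illustrative assumptions."""
    digest = hashlib.md5(resource_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_units
```

Because the number of data storage units is fixed, the same resource always maps to the same unit, so the data of different resources remains separated.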
Compared with the prior art, this embodiment pre-creates multiple data storage units and, based on the information of a received resource, quickly maps the resource to its corresponding data storage unit via a modulo operation or the like, storing the resource's data in that unit. Data belonging to different resources are thus stored separately, which largely spares the second nodes the overhead of contending over the data of the same resource. Moreover, because the number of data storage units in the first node is relatively fixed and each second node fetches the data to be processed in units of data storage units, data can be effectively prevented from being missed. Finally, because the data storage units are evenly distributed among the second nodes, the data can be processed more efficiently.
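The even distribution of data storage units among the second nodes can be sketched as follows. This is a hypothetical illustration: the preset order of second nodes is reduced to an integer index, and units are grouped by taking the unit number modulo the number of participating nodes.

```python
def assign_units(num_units: int, num_workers: int, worker_index: int) -> list:
    """Group the data storage units by the number of participating
    second nodes; the node at position `worker_index` in the preset
    order takes the matching unit from each group."""
    return [unit for unit in range(num_units)
            if unit % num_workers == worker_index]
```

With 8 units and 3 participating nodes, node 0 handles units [0, 3, 6], node 1 handles [1, 4, 7], and node 2 handles [2, 5]; every unit is owned by exactly one node, so no data is missed or processed twice.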
It is readily seen that this embodiment is the system embodiment corresponding to the second embodiment, and the two can be implemented in cooperation with each other. The technical details given in the second embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the technical details given in this embodiment also apply to the second embodiment.
A fourth embodiment of the present invention relates to a non-volatile storage medium for storing a computer-readable program, the computer-readable program being used to cause a computer to execute all or part of the method embodiments described above.
That is, those skilled in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), or a magnetic or optical disk.
Those skilled in the art will understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes in form and detail may be made to them without departing from the spirit and scope of the present invention.
Claims (10)
1. A distributed task processing method, characterized in that it is applied to a distributed system, the distributed system comprising a first node, a third node, and several second nodes, each second node being communicatively connected to the first node and the third node respectively; the method comprising:
the first node receiving the data of a resource, obtaining the data storage unit corresponding to the resource according to the information of the resource, and storing the data of the resource in the corresponding data storage unit; wherein there are multiple resources and multiple data storage units;
the third node, in response to a request from a second node, returning to the second node the number of second nodes participating in processing the data of the resource; and
the second node determining the corresponding target data storage units according to the number of second nodes participating in processing the data of the resource and the number of data storage units in the first node, and retrieving the data of the corresponding resource from the target data storage units for processing.
2. The distributed task processing method according to claim 1, characterized in that obtaining the data storage unit corresponding to the resource according to the information of the resource specifically comprises:
applying a hash transformation to the information of the resource and then taking the modulus to obtain the data storage unit corresponding to the resource; wherein the modulus is the number of data storage units.
3. The distributed task processing method according to claim 1, characterized in that the second node determining the corresponding target data storage units according to the number of second nodes participating in processing the data of the resource and the number of data storage units in the first node specifically comprises:
obtaining the corresponding target data storage units by evenly distributing all data storage units among the second nodes participating in processing the data of the resource.
4. The distributed task processing method according to claim 3, characterized in that obtaining the corresponding target data storage units by evenly distributing all data storage units among the second nodes participating in processing the data of the resource specifically comprises:
grouping all data storage units according to the number of second nodes participating in processing the data of the resource; and
following a preset order of the second nodes participating in processing the data of the resource, picking the corresponding data storage unit from each group as a target data storage unit of each second node.
5. The distributed task processing method according to claim 1, characterized in that retrieving the data of the corresponding resource from the target data storage units for processing specifically comprises:
computing any one or a combination of the following: average CPU utilization, average memory utilization, and URL request count.
6. The distributed task processing method according to claim 1, characterized by further comprising: the second node deleting the processed data after retrieving the data of the corresponding resource from the target data storage units and processing it.
7. The distributed task processing method according to claim 1, characterized in that the first node and the third node are the same server.
8. A distributed task processing system, characterized by comprising: a first node, a third node, and several second nodes, each second node being communicatively connected to the first node and the third node respectively; wherein
the first node is configured to receive the data of a resource, obtain the data storage unit corresponding to the resource according to the information of the resource, and store the data of the resource in the corresponding data storage unit; there being multiple resources and multiple data storage units;
the third node is configured to respond to a request from a second node by returning to the second node the number of second nodes participating in processing the data of the resource; and
the second node is configured to determine the corresponding target data storage units according to the number of second nodes participating in processing the data of the resource and the number of data storage units in the first node, and to retrieve the data of the corresponding resource from the target data storage units for processing.
9. The distributed task processing system according to claim 8, characterized in that the first node is specifically configured to apply a hash transformation to the information of the resource and then take the modulus to obtain the data storage unit corresponding to the resource; wherein the modulus is the number of data storage units.
10. A storage medium, characterized in that it is used to store a computer-readable program, the computer-readable program being used to cause a computer to execute the distributed task processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910280104.5A CN110046040B (en) | 2019-04-09 | 2019-04-09 | Distributed task processing method and system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110046040A true CN110046040A (en) | 2019-07-23 |
CN110046040B CN110046040B (en) | 2021-11-16 |
Family
ID=67276483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910280104.5A Expired - Fee Related CN110046040B (en) | 2019-04-09 | 2019-04-09 | Distributed task processing method and system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110046040B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007017557A1 (en) * | 2005-08-08 | 2007-02-15 | Techila Technologies Oy | Management of a grid computing network using independent software installation packages |
CN103347055A (en) * | 2013-06-19 | 2013-10-09 | 北京奇虎科技有限公司 | System, device and method for processing tasks in cloud computing platform |
CN106095597A (en) * | 2016-05-30 | 2016-11-09 | 深圳市鼎盛智能科技有限公司 | Client data processing method and processing device |
CN108733493A (en) * | 2018-05-25 | 2018-11-02 | 北京车和家信息技术有限公司 | Computational methods, computing device and the computer readable storage medium of resource utilization |
CN108763299A (en) * | 2018-04-19 | 2018-11-06 | 贵州师范大学 | A kind of large-scale data processing calculating acceleration system |
CN108874528A (en) * | 2017-05-09 | 2018-11-23 | 北京京东尚科信息技术有限公司 | Distributed task scheduling storage system and distributed task scheduling storage/read method |
Non-Patent Citations (2)
Title |
---|
XIA Yuyun, "User Manual and Algorithm Description for a Geotechnical Engineering Survey Data Analysis System" (《岩土工程勘察资料分析系统使用手册及有关算法说明》), 31 July 2018 * |
LI Qingliang, "Radar Land and Sea Clutter Measurement and Modeling" (《雷达地海杂波测量与建模》), 31 December 2017 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112463214A (en) * | 2019-09-09 | 2021-03-09 | 北京京东振世信息技术有限公司 | Data processing method and device, computer readable storage medium and electronic device |
CN112463214B (en) * | 2019-09-09 | 2023-11-03 | 北京京东振世信息技术有限公司 | Data processing method and device, computer readable storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chang et al. | A load-balance based resource-scheduling algorithm under cloud computing environment | |
Gao et al. | An energy-aware ant colony algorithm for network-aware virtual machine placement in cloud computing | |
CN103927229A (en) | Scheduling Mapreduce Jobs In A Cluster Of Dynamically Available Servers | |
Mosa et al. | Dynamic virtual machine placement considering CPU and memory resource requirements | |
US10884667B2 (en) | Storage controller and IO request processing method | |
CN103631933A (en) | Distributed duplication elimination system-oriented data routing method | |
CN108509256B (en) | Method and device for scheduling running device and running device | |
CN105897864A (en) | Scheduling method for cloud workflow | |
CN110362379A (en) | Based on the dispatching method of virtual machine for improving ant group algorithm | |
CN105488134A (en) | Big data processing method and big data processing device | |
CN111611076B (en) | Fair distribution method for mobile edge computing shared resources under task deployment constraint | |
CN109285015B (en) | Virtual resource allocation method and system | |
CN110167031B (en) | Resource allocation method, equipment and storage medium for centralized base station | |
CN111857992A (en) | Thread resource allocation method and device in Radosgw module | |
CN110046040A (en) | Distributed task scheduling processing method and system and storage medium | |
CN107273413B (en) | Intermediate table creating method, intermediate table inquiring method and related devices | |
CN106681803B (en) | Task scheduling method and server | |
CN108664322A (en) | Data processing method and system | |
Attiya et al. | TCSA: A dynamic job scheduling algorithm for computational grids | |
CN115118732B (en) | Dynamic weight-based consensus method for block chain enabled data sharing | |
CN116089083A (en) | Multi-target data center resource scheduling method | |
WO2018205890A1 (en) | Task assignment method and system of distributed system, computer readable storage medium and computer device therefor | |
CN113434209B (en) | End edge two-layer collaborative computing unloading method, device, terminal and storage medium | |
Shi et al. | Smart shuffling in MapReduce: a solution to balance network traffic and workloads | |
CN103366014A (en) | Cluster-based cloud computing network data processing system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20211116 |