CN114968586A - Data scheduling method, device and system - Google Patents

Data scheduling method, device and system

Info

Publication number
CN114968586A
Authority
CN
China
Prior art keywords
data
scheduled
scheduling
message queue
various different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210618417.9A
Other languages
Chinese (zh)
Inventor
陈志鹏
帅红波
黄显超
谢炜琪
陈戈
梁展瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202210618417.9A priority Critical patent/CN114968586A/en
Publication of CN114968586A publication Critical patent/CN114968586A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a data scheduling method, device and system, which can be applied to the distributed field or the financial field. The method comprises the following steps: a data scheduling end determines resource scheduling weights for multiple different kinds of data to be scheduled and acquires the quantity to be scheduled corresponding to each kind; the data scheduling end then puts the various kinds of data to be scheduled into a scheduling message queue in sequence according to the resource scheduling weights and the quantities to be scheduled, so that at least one data processing end can obtain the various kinds of data to be scheduled from the scheduling message queue in sequence and call them. Because the order of the data in the scheduling message queue is built from the resource scheduling weights, the data processing end in effect calls the data according to those weights, which enables concurrent access to resources, achieves reasonable resource allocation, and improves the execution efficiency of the distributed architecture.

Description

Data scheduling method, device and system
Technical Field
The present application relates to the field of distributed technologies, and in particular, to a data scheduling method, apparatus, and system.
Background
Under a distributed architecture, distributed applications are characterized by scalability, high reliability, and high performance, and the high-performance characteristic produces concurrent access to resources. Resources, however, are limited: when multiple kinds of data that require resource access, such as data of application services, tasks, and entities, compete for resources, the data must be scheduled to implement resource allocation.
Generally, most existing schemes preferentially call the data that needs to be processed first, that is, resources are allocated first to that data, and only after it has been executed are the remaining resources allocated to other data. However, with the development of new technologies such as big data and cloud computing, the traffic volume of distributed architectures has increased dramatically. If the above scheme is still adopted, the large amount of data that needs to be processed first occupies too many resources; because resources are limited, the data executed later may then be allocated few or even no resources, making it difficult for that data to finish executing and thereby degrading the execution efficiency of the distributed architecture.
Disclosure of Invention
The embodiments of the present application provide a data scheduling method, apparatus, and system, so as to achieve reasonable allocation of resources and improve the execution efficiency of a distributed architecture.
In a first aspect, an embodiment of the present application provides a data scheduling method, including:
a data scheduling end determines resource scheduling weights of multiple different kinds of data to be scheduled and acquires the quantity to be scheduled corresponding to each kind of data to be scheduled;
and the data scheduling end puts the various kinds of data to be scheduled into a scheduling message queue in sequence according to the resource scheduling weights and the quantities to be scheduled, so that at least one data processing end obtains the various kinds of data to be scheduled from the scheduling message queue in sequence and calls them.
Optionally, the step in which the data scheduling end puts the various kinds of data to be scheduled into the scheduling message queue in sequence according to the resource scheduling weights and the quantities to be scheduled includes:
the data scheduling end determines, according to the resource scheduling weights, the single allowed put-in quantity and the put-in order for each kind of data to be scheduled when it is put into the scheduling message queue;
the data scheduling end determines the current put-in quantity corresponding to each kind of data to be scheduled according to the quantity to be scheduled and the single allowed put-in quantity;
the data scheduling end determines, from the various kinds of data to be scheduled, the data to be put into the scheduling message queue according to the current put-in quantities and the put-in order;
when at least one kind of data has not yet been put into the scheduling message queue, the data scheduling end treats that data as new data to be scheduled, re-determines its current put-in quantity, determines new data to be put into the scheduling message queue from the new data to be scheduled according to the re-determined put-in quantity and the put-in order, and repeats this loop until all of the various kinds of data to be scheduled have been put into the scheduling message queue.
Optionally, the step in which the data scheduling end determines the current put-in quantity corresponding to each kind of data to be scheduled according to the quantity to be scheduled and the single allowed put-in quantity includes:
when the quantity to be scheduled is greater than or equal to the single allowed put-in quantity, the data scheduling end takes the single allowed put-in quantity as the current put-in quantity;
and when the quantity to be scheduled is smaller than the single allowed put-in quantity, the data scheduling end takes the quantity to be scheduled as the current put-in quantity.
Optionally, the various kinds of data to be scheduled are different entity data executing the same task;
the step in which the data scheduling end determines the resource scheduling weights of the various kinds of data to be scheduled includes:
the data scheduling end acquires the data volume of each kind of entity data executing the same task;
and the data scheduling end determines the resource scheduling weights according to the data volumes of the entity data executing the same task.
Optionally, the various kinds of data to be scheduled are different entity data executing different tasks;
the step in which the data scheduling end determines the resource scheduling weights of the various kinds of data to be scheduled includes:
the data scheduling end determines, according to the processing durations of the entity data executing the different tasks, the association between the complexity of the different tasks and the data volumes of the entity data;
and the data scheduling end determines the resource scheduling weights based on that association.
In a second aspect, an embodiment of the present application provides a data scheduling method, including:
at least one data processing end obtains the various kinds of data to be scheduled from the scheduling message queue in sequence and calls them; the data scheduling end has put the various kinds of data to be scheduled into the scheduling message queue in sequence according to the determined resource scheduling weights and the acquired quantities to be scheduled corresponding to each kind.
Optionally, the data scheduling method further includes:
if at least one kind of data to be scheduled changes, the at least one data processing end holds an election when the quantity of data to be scheduled it has acquired is greater than or equal to a set threshold; the election selects one of the at least one data processing end as a new data scheduling end, and the new data scheduling end puts the changed data to be scheduled into the scheduling message queue.
In a third aspect, an embodiment of the present application provides a data scheduling apparatus, applied to a data scheduling end, the apparatus including:
an information acquisition module, configured to determine resource scheduling weights of multiple different kinds of data to be scheduled and acquire the quantity to be scheduled corresponding to each kind;
and a scheduling message queue determining module, configured to put the various kinds of data to be scheduled into the scheduling message queue in sequence according to the resource scheduling weights and the quantities to be scheduled, so that at least one data processing end obtains the various kinds of data to be scheduled from the scheduling message queue in sequence and calls them.
In a fourth aspect, an embodiment of the present application provides a data scheduling apparatus, applied to a data processing end, the apparatus including:
a data-to-be-scheduled calling module, configured to obtain the various kinds of data to be scheduled from the scheduling message queue in sequence and call them; the data scheduling end has put the various kinds of data to be scheduled into the scheduling message queue in sequence according to the determined resource scheduling weights and the acquired quantities to be scheduled corresponding to each kind.
In a fifth aspect, an embodiment of the present application provides a data scheduling system, including:
a data scheduling end and at least one data processing end;
the data scheduling end is configured to determine resource scheduling weights of multiple different kinds of data to be scheduled, acquire the quantity to be scheduled corresponding to each kind, and put the various kinds of data to be scheduled into a scheduling message queue according to the resource scheduling weights and the quantities to be scheduled;
and the data processing end is configured to obtain the various kinds of data to be scheduled from the scheduling message queue and call them.
According to the technical scheme, the embodiment of the application has the following advantages:
in the embodiments of the present application, the data scheduling end can put multiple different kinds of data to be scheduled into the scheduling message queue in sequence according to the determined resource scheduling weights and the acquired quantities to be scheduled corresponding to each kind; correspondingly, at least one data processing end can obtain the various kinds of data to be scheduled from the scheduling message queue in sequence and call them. Because the data in the scheduling message queue is put in an order constructed from the resource scheduling weights and quantities to be scheduled of each kind of data, the data processing end in effect calls the data according to the resource scheduling weights, which enables concurrent access to resources, achieves reasonable resource allocation, and improves the execution efficiency of the distributed architecture.
Drawings
Fig. 1 is a schematic diagram of a data scheduling system according to an embodiment of the present application;
fig. 2 is a flowchart of a data scheduling method according to an embodiment of the present application;
fig. 3 is a flowchart of an implementation manner for sequentially placing multiple different data to be scheduled into a scheduling message queue according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a data scheduling apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another data scheduling apparatus according to an embodiment of the present application.
Detailed Description
As described above, the inventors found in their study of data scheduling methods that most existing schemes preferentially call the data that needs to be processed first, that is, resources are allocated first to that data, and only after it has been executed are the remaining resources allocated to other data. With the development of emerging technologies such as big data and cloud computing, the traffic volume of distributed architectures has increased dramatically; if such a scheme is still adopted, the limited resources leave little for the data executed later. In that case, if the later-executed data cannot access sufficient resources, it is difficult for it to finish executing, and the execution efficiency of the distributed architecture suffers.
To solve the foregoing problem, an embodiment of the present application provides a data scheduling method in which the data scheduling end puts multiple different kinds of data to be scheduled into the scheduling message queue in sequence according to their determined resource scheduling weights and the acquired quantity to be scheduled corresponding to each kind; correspondingly, at least one data processing end obtains the various kinds of data to be scheduled from the scheduling message queue in sequence and calls them.
Because the data in the scheduling message queue is put in an order constructed from the resource scheduling weights and quantities to be scheduled of each kind of data, the data processing end in effect calls the data according to the resource scheduling weights, enabling concurrent resource access, reasonable resource allocation, and improved execution efficiency of the distributed architecture.
It should be noted that the data scheduling end and the data processing end may be respectively deployed in a terminal device or a data processing device such as a server. The terminal device can be a smart phone, a computer or a tablet computer. The server may be a stand-alone server, a cluster server, or a cloud server. In order to facilitate understanding of the technical solutions provided by the embodiments of the present application, the following describes an exemplary data scheduling system provided by the embodiments of the present application with reference to the embodiments and the accompanying drawings.
Fig. 1 is a schematic diagram of a data scheduling system according to an embodiment of the present application. Referring to fig. 1, an embodiment of the present application provides a data scheduling system, which may include a data scheduling end 11 and at least one data processing end 12. As an example, the data scheduling terminal 11 may be deployed in a cloud server, and cooperate with at least one data processing terminal 12 deployed in an independent server, so as to implement reasonable allocation of resources, and further improve the execution efficiency of the distributed architecture.
In actual application, taking the producer-consumer mode as an example, the data scheduling end 11 deployed in the cloud server may act as the producer, and the at least one data processing end 12 deployed in an independent server may act as the consumer; the two ends do not communicate directly, but through the scheduling message queue. Specifically, the data scheduling end 11, as producer, may determine resource scheduling weights of multiple different kinds of data to be scheduled, obtain the quantity to be scheduled of each kind, and put the various kinds of data to be scheduled into the scheduling message queue in sequence according to the resource scheduling weights and quantities to be scheduled. Thus, once the producer has obtained the quantities to be scheduled and the resource scheduling weights, it can put the data directly into the scheduling message queue without waiting for the consumer to fetch it. Correspondingly, the at least one data processing end 12, as consumer, can obtain the various kinds of data to be scheduled from the scheduling message queue in sequence and call them; the consumer accesses the scheduling message queue directly for the data it needs without accessing the producer. In this way, data scheduling is completed based on the resource scheduling weights so that the data can access sufficient resources, resources are allocated reasonably, the producer and consumer, that is, the data scheduling end and the data processing end, are decoupled, and the processing performance of both ends is improved. The specific implementation process is described below.
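The producer-consumer decoupling described above can be sketched in a few lines. This is a hedged, in-memory illustration (the patent prescribes no language or API); a real deployment would place an external queue such as a Redis list between the two ends, and all function names here are illustrative.

```python
from collections import deque

def produce(queue, items):
    """Data scheduling end (producer): push data to be scheduled into the queue."""
    for item in items:
        queue.appendleft(item)      # corresponds to LPUSH on a Redis list

def consume(queue):
    """Data processing end (consumer): drain the queue in arrival order."""
    called = []
    while queue:
        called.append(queue.pop())  # corresponds to RPOP on a Redis list
    return called

# Neither function calls the other: the queue is the only point of contact.
q = deque()
produce(q, ["A", "A", "A", "B", "B", "C"])
print(consume(q))  # items come out in the order the producer put them in
```

Note that the consumer never touches the producer, which is the decoupling property the paragraph above describes.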
It should be noted that the data scheduling method, apparatus, and system provided by the present invention can be used in the fields of artificial intelligence, blockchain, distributed computing, cloud computing, big data, the Internet of Things, the mobile Internet, network security, chips, virtual reality, augmented reality, holography, quantum computing, quantum communication, quantum measurement, digital twins, or finance. The above is merely an example and does not limit the application field of the data scheduling method, apparatus, and system provided by the present invention.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 2 is a flowchart of a data scheduling method according to an embodiment of the present application. Referring to fig. 2, the data scheduling method provided by the present application is described below using the interaction between the data scheduling end and the data processing end as the execution body of the scheme. The data scheduling method may include:
S21: the data scheduling end determines the resource scheduling weights of the various kinds of data to be scheduled and acquires the quantity to be scheduled corresponding to each kind.
The data to be scheduled is pending data that needs to access resources. Different kinds of data to be scheduled may be different entity data executing the same task, or different entity data executing different tasks. In practical application, taking banking business as an example, different entity data executing the same task may be embodied as the transaction data and account data corresponding to a statistical analysis task, while different entity data executing different tasks may be embodied as the transaction data and account data corresponding to a statistical analysis task and a collection-and-payment task, respectively. Accordingly, the embodiments of the present application explain the process of determining the resource scheduling weights for each of these cases.
In one case, the various kinds of data to be scheduled are different entity data executing the same task. Correspondingly, the process of determining the resource scheduling weights may specifically include: the data scheduling end acquires the data volume of each kind of entity data executing the same task, and determines the resource scheduling weights according to those data volumes. Although the task is the same, the data volumes of the different entity data may differ greatly; determining the resource scheduling weights based on data volume helps the entity data finish at the same time when called by the data processing end, so that the entity data can access sufficient resources, resources are allocated reasonably, and the execution efficiency of the distributed architecture is improved.
In another case, the various kinds of data to be scheduled are different entity data executing different tasks. Correspondingly, the process of determining the resource scheduling weights may specifically include: the data scheduling end determines, according to the processing durations of the entity data executing the different tasks, the association between the complexity of the different tasks and the data volumes of the entity data, and then determines the resource scheduling weights based on that association. Because the complexity of the tasks differs and the data volumes of the entity data executing them may also differ greatly, deriving the resource scheduling weights from the processing durations helps the entity data finish at the same time when called by the data processing end, so that the entity data can access sufficient resources, resources are allocated reasonably, and the execution efficiency of the distributed architecture is improved.
In yet another case, to simplify the operation flow, the data scheduling end may determine the resource scheduling weights directly from the quantities to be scheduled corresponding to each kind of data. Specifically, the data scheduling end may determine the greatest common divisor of the quantities to be scheduled and use it to derive the resource scheduling weight of each kind of data, for example by dividing each quantity to be scheduled by that common divisor.
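One plausible reading of this simplification, namely that the quantities to be scheduled divided by their greatest common divisor yield the weights, can be sketched as follows; the function name and sample counts are illustrative assumptions, not taken from the patent.

```python
from math import gcd
from functools import reduce

def weights_from_counts(counts):
    """Derive resource scheduling weights by dividing each pending quantity
    by the greatest common divisor of all quantities."""
    g = reduce(gcd, counts.values())
    return {name: n // g for name, n in counts.items()}

pending = {"A": 300, "B": 200, "C": 100}  # quantities to be scheduled
print(weights_from_counts(pending))       # {'A': 3, 'B': 2, 'C': 1}
```

This keeps the weights in the same proportion as the workload while keeping them small, which matches the 3:2:1 example used later in Table 1.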
S22: the data scheduling end puts the various kinds of data to be scheduled into a scheduling message queue in sequence according to the resource scheduling weights and the quantities to be scheduled.
The scheduling message queue may be implemented with a Remote Dictionary Server (Redis) database. A scheduling message queue built on Redis supports a distributed architecture well by virtue of Redis's distributed nature, which facilitates data scheduling. It also makes it convenient to store the additional information associated with the various kinds of data to be scheduled, improving the usability of the scheduling message queue.
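As a hedged sketch of a Redis-list-backed scheduling message queue: with the redis-py client the enqueue and dequeue calls would be `lpush` and `rpop` on a list key; here a tiny in-memory stand-in models those two commands so the snippet runs without a server. The class name and key name are assumptions for illustration.

```python
class InMemoryRedisList:
    """Minimal stand-in for the two Redis list commands used by the queue."""
    def __init__(self):
        self._store = {}

    def lpush(self, key, value):
        # LPUSH: prepend to the head of the list stored at key
        self._store.setdefault(key, []).insert(0, value)

    def rpop(self, key):
        # RPOP: remove and return the tail (oldest) element, or None if empty
        items = self._store.get(key)
        return items.pop() if items else None

r = InMemoryRedisList()
for item in ["A", "A", "B"]:
    r.lpush("schedule:queue", item)
print(r.rpop("schedule:queue"))  # "A": LPUSH/RPOP together give FIFO order
```

With a real server the body of each method would be a single redis-py call on a shared connection, which is what lets multiple data processing ends consume the same queue concurrently.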
The embodiment of the present application does not specifically limit the implementation of putting the various kinds of data to be scheduled into the scheduling message queue in sequence; for ease of understanding, one possible implementation is described below.
S23: at least one data processing end obtains the various kinds of data to be scheduled from the scheduling message queue in sequence and calls them.
Based on S21-S23, in this embodiment the data scheduling end puts the various kinds of data to be scheduled into the scheduling message queue in sequence according to the determined resource scheduling weights and the acquired quantities to be scheduled, and correspondingly the at least one data processing end obtains the various kinds of data to be scheduled from the scheduling message queue in sequence and calls them. Because the data in the scheduling message queue is put in an order constructed from the resource scheduling weights and quantities to be scheduled of each kind of data, the data processing end in effect calls the data according to the resource scheduling weights, enabling concurrent resource access, reasonable resource allocation, and improved execution efficiency of the distributed architecture.
In addition, in this embodiment, if at least one kind of data to be scheduled changes, the at least one data processing end may hold an election when the quantity of data to be scheduled it has acquired is greater than or equal to a set threshold. The election selects one of the at least one data processing end as the new data scheduling end; the new data scheduling end puts the changed data to be scheduled into the scheduling message queue, and the original data scheduling end becomes a new data processing end. The election may be carried out by preemption, voting, or similar means. Specifically, in the embodiment of the present application, the at least one data processing end may implement the election by preempting a lock through an atomic operation of the Redis database. This election mechanism allows switching between the data scheduling end and the data processing end, avoids the performance waste of a dedicated data scheduling end, and improves robustness. For the specific way in which the new data scheduling end puts the changed data to be scheduled into the scheduling message queue, refer to the implementation of putting the various kinds of data to be scheduled into the scheduling message queue described above; details are not repeated here.
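The lock-preemption election can be sketched as follows. Real Redis provides atomic set-if-not-exists semantics (the SETNX behavior, available in redis-py as `r.set(key, value, nx=True)`); here a plain dict models that atomic operation, and all names are illustrative assumptions.

```python
def try_acquire(lock_store, key, candidate):
    """Attempt to become the new data scheduling end by taking the lock.
    Models atomic SETNX: succeeds only if no one holds the key yet."""
    if key not in lock_store:
        lock_store[key] = candidate
        return True
    return False

# Several data processing ends race for the lock; exactly one wins
# and becomes the new data scheduling end.
locks = {}
winners = [c for c in ["worker-1", "worker-2", "worker-3"]
           if try_acquire(locks, "scheduler-lock", c)]
print(winners)  # only the first candidate to reach the lock succeeds
```

In a real deployment the check-and-set must be a single server-side operation (as SETNX is), otherwise two ends could both believe they won; the dict here only stands in for that guarantee.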
To implement concurrent resource access based on the resource scheduling weights, the embodiment of the present application further provides an implementation of putting the various kinds of data to be scheduled into the scheduling message queue in sequence (i.e., S22), which specifically includes S221-S224. S221 to S224 are described below with reference to the embodiments and the drawings.
Fig. 3 is a flowchart of an implementation manner of sequentially placing multiple different data to be scheduled into a scheduling message queue according to an embodiment of the present application. As shown in fig. 3, S221-S224 may specifically include:
S221: the data scheduling end determines, according to the resource scheduling weights, the single allowed put-in quantity and the put-in order for each kind of data to be scheduled when it is put into the scheduling message queue.
The putting order may follow the descending order of the resource scheduling weights: data to be scheduled with a larger resource scheduling weight is put into the scheduling message queue first, and data to be scheduled with a smaller resource scheduling weight is put in afterwards.
In practical application, entity data A, entity data B, and entity data C executing the same task are taken as an example of the various different data to be scheduled, with resource scheduling weights of 3, 2, and 1, respectively. For ease of understanding, the single allowable putting quantities and the putting order used when entity data A, entity data B, and entity data C are respectively put into the scheduling message queue are shown below in the form of Table 1.
TABLE 1
A A A B B C
With reference to Table 1, if the resource scheduling weights of entity data A, entity data B, and entity data C are 3, 2, and 1, respectively, the single allowable putting quantities when they are respectively put into the scheduling message queue may be 3, 2, and 1, and the putting order is entity data A, entity data B, entity data C in sequence. In addition, the representation form of the single allowable putting quantity and the putting order is not specifically limited in the embodiments of the present application.
S222: the data scheduling end determines, according to the to-be-scheduled quantity and the single allowable putting quantity, the current to-be-put quantity corresponding to each of the various different data to be scheduled.
The manner of determining the current to-be-put quantities corresponding to the various different data to be scheduled is not specifically limited in the embodiments of the present application; for ease of understanding, one possible implementation is described below.
As a possible implementation manner, S222 may specifically include: when the to-be-scheduled quantity is greater than or equal to the single allowable putting quantity, the data scheduling end takes the single allowable putting quantity as the current to-be-put quantity; and when the to-be-scheduled quantity is smaller than the single allowable putting quantity, the data scheduling end takes the to-be-scheduled quantity as the current to-be-put quantity. With reference to the example in Table 1, the single allowable putting quantities when entity data A, entity data B, and entity data C are respectively put into the scheduling message queue are 3, 2, and 1. Correspondingly, if the to-be-scheduled quantities corresponding to entity data A, entity data B, and entity data C are 1, 3, and 1, respectively, then, based on the size relationship between the single allowable putting quantity and the to-be-scheduled quantity, the determined current to-be-put quantities are 1, 2, and 1, respectively.
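The comparison rule of this implementation reduces to taking a minimum, which can be written as a short sketch (function and variable names are illustrative, not from the application):

```python
def current_put_count(to_be_scheduled, single_allowed):
    """Current to-be-put quantity: the single allowable putting quantity
    when enough data is pending, otherwise all of the remaining data."""
    return min(to_be_scheduled, single_allowed)

# Example above: single allowable putting quantities 3, 2, 1 for A, B, C
# and to-be-scheduled quantities 1, 3, 1 yield current quantities 1, 2, 1.
counts = [current_put_count(p, a) for p, a in zip([1, 3, 1], [3, 2, 1])]
```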
S223: the data scheduling end determines, according to the current to-be-put quantities and the putting order, the to-be-put data to be placed into the scheduling message queue from the various different data to be scheduled.
For example, the putting order of entity data A, entity data B, and entity data C is A, B, C, and their current to-be-put quantities are 2, 2, and 1, respectively. Based on this, for ease of understanding, the scheduling message queue after the to-be-put data has been placed into it is shown below in the form of Table 2.
TABLE 2
A A B B C
S224: when at least one type of data among the various different data to be scheduled has not been put into the scheduling message queue, the data scheduling end takes that data as new data to be scheduled, re-determines the current to-be-put quantity, determines new to-be-put data from the new data to be scheduled according to the re-determined current to-be-put quantity and the putting order, and repeats this cycle until all of the various different data to be scheduled have been put into the scheduling message queue.
With reference to the example in Table 1, the resource scheduling weights of entity data A, entity data B, and entity data C are 3, 2, and 1, the single allowable putting quantities are 3, 2, and 1, and the putting order is A, B, C in sequence. If the to-be-scheduled quantities corresponding to entity data A, entity data B, and entity data C are 4, 5, and 1, respectively, the data put into the scheduling message queue in the first round is 3 of entity data A, 2 of entity data B, and 1 of entity data C in sequence. Since 1 of entity data A and 3 of entity data B have not yet been put into the scheduling message queue, the putting operation is re-executed with them as new data to be scheduled, so the data put into the queue in the second round is 1 of entity data A and 2 of entity data B in sequence. At this point, 1 of entity data B still remains, so it continues as new data to be scheduled and the putting operation is executed once more. Thus, after three rounds of putting, all of the various different data to be scheduled have been put into the scheduling message queue. Correspondingly, for ease of understanding, the order in which the data to be scheduled is placed in the scheduling message queue is shown below in the form of Table 3.
TABLE 3
A A A B B C A B B B
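The three rounds of putting described above amount to a weighted round-robin loop over S222-S224; the sketch below reproduces Table 3 (function and variable names are illustrative assumptions, not from the application):

```python
def enqueue_weighted(pending, weights, order):
    """Cycle through the data types in putting order, each round placing
    at most `weight` items of each type (S222/S223), and repeat with the
    leftovers as new data to be scheduled (S224) until nothing is pending."""
    queue = []
    remaining = dict(pending)
    while any(remaining[kind] > 0 for kind in order):
        for kind in order:
            put = min(remaining[kind], weights[kind])  # current to-be-put quantity
            queue.extend([kind] * put)                 # place into the queue
            remaining[kind] -= put
    return queue

# Example of Table 3: weights A=3, B=2, C=1; to-be-scheduled A=4, B=5, C=1
result = enqueue_weighted({"A": 4, "B": 5, "C": 1},
                          {"A": 3, "B": 2, "C": 1}, ["A", "B", "C"])
# result == ["A", "A", "A", "B", "B", "C", "A", "B", "B", "B"]
```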
As can be seen from the above description of S221-S224, the data to be scheduled can be put into the scheduling message queue through one or more rounds of cycling based on the respective resource scheduling weights and to-be-scheduled quantities of the various different data to be scheduled. Because the data in the scheduling message queue is ordered according to those weights and quantities, the data processing ends that sequentially acquire and call data from the queue naturally do so in proportion to the resource scheduling weights, thereby achieving concurrent resource access, realizing reasonable allocation of resources, and further improving the execution efficiency of the distributed architecture.
Based on the data scheduling method provided by the above embodiment, the embodiment of the present application further provides a data scheduling apparatus. The data scheduling apparatus is described below with reference to the embodiments and the drawings.
Fig. 4 is a schematic structural diagram of a data scheduling apparatus according to an embodiment of the present application. Referring to fig. 4, the data scheduling apparatus 400 provided in this embodiment of the present application may be deployed at a data scheduling end. The data scheduling apparatus 400 may include:
the information obtaining module 401 is configured to determine resource scheduling weights of multiple different data to be scheduled, and obtain respective corresponding to-be-scheduled quantities of the multiple different data to be scheduled;
and the scheduling message queue determining module 402 is configured to sequentially place multiple different data to be scheduled into the scheduling message queue according to the resource scheduling weight and the number of data to be scheduled, so that at least one data processing end sequentially obtains the multiple different data to be scheduled from the scheduling message queue and calls the data.
As an embodiment, in order to implement reasonable allocation of resources, the scheduling message queue determining module 402 may specifically include:
the first determining module is used for determining the single allowable putting quantity and the putting sequence when various different data to be scheduled are respectively put into the scheduling message queue according to the resource scheduling weight;
the second determining module is used for determining the current to-be-put quantity corresponding to various different to-be-scheduled data according to the to-be-scheduled quantity and the single allowable to-be-put quantity;
the third determining module is used for determining data to be put into the scheduling message queue from a plurality of different data to be scheduled according to the current data to be put into quantity and the order;
and the fourth determining module is used for, when at least one type of data among the various different data to be scheduled has not been put into the scheduling message queue, taking that data as new data to be scheduled, re-determining the current to-be-put quantity, determining new to-be-put data from the new data to be scheduled according to the re-determined current to-be-put quantity and the putting sequence, and executing cyclically until the various different data to be scheduled are all put into the scheduling message queue.
As an embodiment, in order to implement reasonable allocation of resources, the second determining module may specifically be configured to:
when the quantity to be scheduled is greater than or equal to the single allowable putting quantity, taking the single allowable putting quantity as the current to-be-put quantity;
and when the quantity to be scheduled is less than the single allowable putting quantity, taking the quantity to be scheduled as the current to-be-put quantity.
As an implementation manner, in order to achieve reasonable allocation of resources, the multiple different data to be scheduled are multiple different entity data that execute the same task. Correspondingly, the information obtaining module 401 may specifically include:
the data volume acquisition module is used for acquiring the data volumes of various different entity data for executing the same task;
the first determining module is used for determining the resource scheduling weight according to the data volume of various different entity data executing the same task.
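For the same-task case, one simple way the first determining module could derive the weights is to make them proportional to the data volumes. The sketch below is an assumption for illustration (the GCD normalization is not specified in the application); it yields the example weights 3, 2, 1 from volumes 300, 200, 100.

```python
from functools import reduce
from math import gcd

def weights_from_volumes(volumes):
    """Derive integer resource scheduling weights proportional to the data
    volumes of entity data executing the same task, reduced by their
    greatest common divisor so the weights stay small."""
    g = reduce(gcd, volumes.values())
    return {name: volume // g for name, volume in volumes.items()}
```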
As an implementation manner, in order to achieve reasonable allocation of resources, the multiple different data to be scheduled are multiple different entity data for executing different tasks. Correspondingly, the information obtaining module 401 may specifically include:
the association relation determining module is used for determining the association relation between the complexity of the different tasks and the data volumes of the various different entity data according to the processing durations of the various different entity data executing the different tasks;
and the second determining module is used for determining the resource scheduling weight based on the association relation.
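For the different-task case, the application does not spell out the association formula; one hedged sketch is to score each entity data by its observed processing duration (a longer duration indicating a more complex task) and normalize against the shortest one. The scoring rule here is an assumption for illustration only.

```python
def weights_from_durations(durations):
    """Assign a higher resource scheduling weight to entity data whose task
    takes longer to process, normalizing processing durations against the
    shortest one to obtain small integer weights (at least 1)."""
    shortest = min(durations.values())
    return {name: max(1, round(d / shortest)) for name, d in durations.items()}
```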
Based on the data scheduling method provided by the above embodiment, the embodiment of the present application further provides a data scheduling apparatus. The data scheduling apparatus is described below with reference to the embodiments and the drawings.
Fig. 5 is a schematic structural diagram of a data scheduling apparatus according to an embodiment of the present application. Referring to fig. 5, a data scheduling apparatus 500 provided in this embodiment of the present application may be deployed at a data processing end. The data scheduling apparatus 500 may include:
a to-be-scheduled data calling module 501, configured to sequentially obtain various different data to be scheduled from the scheduling message queue and call the data; wherein the data scheduling end sequentially puts the various data to be scheduled into the scheduling message queue according to the determined resource scheduling weights of the various data to be scheduled and the obtained to-be-scheduled quantities respectively corresponding to the various data to be scheduled.
As an embodiment, in order to achieve reasonable allocation of resources, the data scheduling apparatus 500 may further include:
the election module is used for, if at least one type of data to be scheduled changes, performing an election when the quantity of acquired data to be scheduled is greater than or equal to a set threshold; the election is used to select one of the at least one data processing end as a new data scheduling end, and the new data scheduling end puts the changed at least one type of data to be scheduled into the scheduling message queue.
Embodiments of the present application may also provide a data scheduling system, and refer to fig. 1 and the above-described system embodiments for technical details. In the data scheduling system, corresponding data scheduling devices may be deployed at the data scheduling end and the data processing end, respectively, and the data processing end and the data scheduling end may implement the above-described method embodiments together in an interactive manner.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for scheduling data, comprising:
a data scheduling end determines resource scheduling weights of various different data to be scheduled and acquires the quantity of the data to be scheduled corresponding to the various different data to be scheduled;
and the data scheduling end sequentially puts the various different data to be scheduled into a scheduling message queue according to the resource scheduling weight and the quantity to be scheduled, so that at least one data processing end sequentially obtains the various different data to be scheduled from the scheduling message queue and calls the data.
2. The method of claim 1, wherein the data scheduling end sequentially puts the plurality of different data to be scheduled into a scheduling message queue according to the resource scheduling weight and the number to be scheduled, and the method comprises:
the data scheduling end determines the single allowable putting quantity and putting sequence when the various different data to be scheduled are respectively put into the scheduling message queue according to the resource scheduling weight;
the data scheduling end determines, according to the quantity to be scheduled and the single allowable putting quantity, the current to-be-put quantities respectively corresponding to the various different data to be scheduled;
the data scheduling end determines to-be-put data to be put into the scheduling message queue from the various different to-be-scheduled data according to the current to-be-put quantity and the putting sequence;
when at least one kind of data which is not put into the scheduling message queue exists in the various different data to be scheduled, the data scheduling end takes the at least one kind of data which is not put into the scheduling message queue as new data to be scheduled to re-determine the number of the data to be put into the scheduling message queue, determines new data to be put into the scheduling message queue from the new data to be scheduled according to the re-determined number of the data to be put into the scheduling message queue and the putting sequence, and executes in a circulating mode until the various different data to be scheduled are all put into the scheduling message queue.
3. The method according to claim 2, wherein the determining, by the data scheduling end according to the quantity to be scheduled and the single allowable putting quantity, of the current to-be-put quantities respectively corresponding to the various different data to be scheduled comprises:
when the quantity to be scheduled is greater than or equal to the single allowable putting quantity, the data scheduling end takes the single allowable putting quantity as the current to-be-put quantity;
and when the quantity to be scheduled is smaller than the single allowable putting quantity, the data scheduling end takes the quantity to be scheduled as the current to-be-put quantity.
4. The method of claim 1, wherein the plurality of different data to be scheduled are a plurality of different entity data for executing a same task;
the data scheduling end determines resource scheduling weights of various different data to be scheduled, and the method comprises the following steps:
the data scheduling end acquires the data volume of the multiple different entity data executing the same task;
and the data scheduling end determines the resource scheduling weight according to the data volume of the various different entity data executing the same task.
5. The method of claim 1, wherein the plurality of different data to be scheduled are a plurality of different entity data for performing different tasks;
the data scheduling end determines resource scheduling weights of various different data to be scheduled, and the method comprises the following steps:
the data scheduling end determines an association relation between the complexity of the different tasks and the data volumes of the various different entity data according to the processing durations of the various different entity data executing the different tasks;
and the data scheduling end determines the resource scheduling weight based on the association relation.
6. A method for scheduling data, comprising:
at least one data processing terminal obtains various different data to be scheduled from the scheduling message queue in sequence and calls the data; and the data scheduling end sequentially puts the various data to be scheduled into the scheduling message queue according to the determined resource scheduling weight of the various data to be scheduled and the obtained quantity to be scheduled corresponding to the various data to be scheduled respectively.
7. The method of claim 6, further comprising:
if at least one kind of data to be scheduled changes, the at least one data processing terminal performs an election when the quantity of the acquired data to be scheduled is greater than or equal to a set threshold value; and the election is used for selecting one of the at least one data processing terminal as a new data scheduling terminal, and the new data scheduling terminal puts the changed at least one kind of data to be scheduled into the scheduling message queue.
8. A data scheduling apparatus, applied to a data scheduling end, the apparatus comprising:
the information acquisition module is used for determining the resource scheduling weight of various different data to be scheduled and acquiring the quantity to be scheduled corresponding to the various different data to be scheduled respectively;
and the scheduling message queue determining module is used for sequentially putting the various different data to be scheduled into the scheduling message queue according to the resource scheduling weight and the quantity to be scheduled, so that at least one data processing end sequentially obtains the various different data to be scheduled from the scheduling message queue and calls the data.
9. A data scheduling apparatus, applied to a data processing side, the apparatus comprising:
the data to be scheduled calling module is used for acquiring various different data to be scheduled from the scheduling message queue in sequence and calling the data; and the data scheduling end sequentially puts the various data to be scheduled into the scheduling message queue according to the determined resource scheduling weight of the various data to be scheduled and the obtained quantity to be scheduled corresponding to the various data to be scheduled respectively.
10. A data scheduling system, comprising:
the system comprises a data scheduling end and at least one data processing end;
the data scheduling end is used for determining resource scheduling weights of various different data to be scheduled and acquiring the quantity to be scheduled corresponding to the various different data to be scheduled respectively; according to the resource scheduling weight and the number to be scheduled, putting the various different data to be scheduled into a scheduling message queue;
and the data processing terminal is used for acquiring the various different data to be scheduled from the scheduling message queue and calling the data.
CN202210618417.9A 2022-06-01 2022-06-01 Data scheduling method, device and system Pending CN114968586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210618417.9A CN114968586A (en) 2022-06-01 2022-06-01 Data scheduling method, device and system

Publications (1)

Publication Number Publication Date
CN114968586A true CN114968586A (en) 2022-08-30

Family

ID=82959113

Country Status (1)

Country Link
CN (1) CN114968586A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination