CN117201496A - Task scheduling method, task submitting method, device, equipment and medium - Google Patents


Info

Publication number: CN117201496A
Application number: CN202210600795.4A
Authority: CN (China)
Legal status: Pending (assumed status; not a legal conclusion)
Prior art keywords: data, task, scheduling, layer, scheduling node
Other languages: Chinese (zh)
Inventors: 徐照淼, 马斌山, 曹铭斌, 马国俊
Current Assignee: Beijing Zitiao Network Technology Co Ltd
Original Assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority: CN202210600795.4A

Abstract

The embodiment of the disclosure relates to a task scheduling method, a task submitting method, a device, equipment and a medium, wherein the task scheduling method is applied to a scheduling layer of a server, the server also comprises a data layer, a registration center and a routing layer, and the method comprises the following steps: acquiring data distribution information sent by a registration center, wherein the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of a data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node; and controlling the master dispatching node of each dispatching node group to read task data of the corresponding data partition from the data layer based on the data distribution information, and submitting the task to the routing layer. According to the embodiment of the disclosure, the plurality of task data can be scheduled simultaneously in the plurality of scheduling node groups, and each scheduling node group is provided with the main-standby architecture, so that the main-standby multi-activity strategy is realized, the high availability of the scheduling nodes is ensured, and the success rate of task scheduling is improved.

Description

Task scheduling method, task submitting method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a task scheduling method, a task submitting method, a device, equipment and a medium.
Background
With the continuous development of internet technology and data processing technology, task processing in many business scenarios has the characteristics of large task amount and high concurrency in a short time, such as task backtracking.
In a typical task processing flow, only a single node performs task scheduling at any given moment, and when the number of concurrent tasks is high, that single node becomes a performance bottleneck. At present, the single-node problem can be mitigated by setting up multiple nodes to schedule tasks in parallel, but the availability of each node cannot be guaranteed, so some tasks may be blocked and fail to be submitted normally.
Disclosure of Invention
In order to solve the technical problems, the present disclosure provides a task scheduling method, a task submitting method, a device, equipment and a medium.
The embodiment of the disclosure provides a task scheduling method, which is applied to a scheduling layer of a server, wherein the server further comprises a data layer, a registration center and a routing layer, and the method comprises the following steps:
acquiring data distribution information sent by the registry, wherein the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node;
And controlling the master scheduling node of each scheduling node group to read task data of the corresponding data partition from the data layer based on the data allocation information, and submitting tasks to the routing layer.
The embodiment of the disclosure also provides a task submitting method, which is applied to the client and comprises the following steps:
acquiring a plurality of task data;
dividing the plurality of task data into a plurality of batches of task data, performing rate-limiting processing on the batches of task data in a sliding-time-window manner, and then submitting the batches of task data to a data layer of a server so that a scheduling layer of the server reads the task data from the data layer, wherein the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node.
The embodiment of the disclosure also provides a task scheduling device, which is arranged on a scheduling layer of a server, wherein the server further comprises a data layer, a registration center and a routing layer, and the device comprises:
the distribution module is used for acquiring data distribution information sent by the registration center, wherein the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node;
And the scheduling module is used for controlling the main scheduling node of each scheduling node group to read task data of the corresponding data partition from the data layer based on the data allocation information and submitting the task to the routing layer.
The embodiment of the disclosure also provides a task submitting apparatus, which is arranged at a client and comprises:
the data module is used for acquiring a plurality of task data;
the submitting module is used for dividing the plurality of task data into a plurality of batches of task data, performing rate-limiting processing on the batches of task data in a sliding-time-window manner, and then submitting the batches of task data to the data layer of a server so that the scheduling layer of the server reads the task data from the data layer, wherein the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement a method as provided by an embodiment of the disclosure.
The present disclosure also provides a computer-readable storage medium storing a computer program for executing the method as provided by the embodiments of the present disclosure.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages: according to the task scheduling scheme provided by the embodiment of the disclosure, data distribution information sent by a registration center is obtained through a scheduling layer of a server, the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node; and controlling the master dispatching node of each dispatching node group to read task data of the corresponding data partition from the data layer based on the data distribution information, and submitting the task to the routing layer. By adopting the technical scheme, the scheduling nodes of the scheduling layer are grouped and the task data of the data layer are partitioned, so that a plurality of task data can be scheduled in a plurality of scheduling node groups at the same time, the problem of single node performance bottleneck is solved, and as each scheduling node group is provided with a main-standby architecture, the main-standby multi-activity strategy is realized, the high availability of the scheduling nodes is ensured, and the success rate of task scheduling is further improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a task scheduling method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a task schedule provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a task submission method according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a task submission provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an overall architecture of task processing provided by an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a task scheduling device according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a task submission apparatus according to an embodiment of the disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but are provided to provide a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifications of "one" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
In the related art, task scheduling methods generally fall into two types. The first checks configuration items on a timer to determine whether a task should be triggered; specifically, the timed task can be implemented with a crontab command or a framework. The former runs on a single server, has limited capacity, and a high barrier to use; with the latter, every modification of the timed-task configuration information requires a redeployment, making the process cumbersome. The second type introduces master-standby distributed scheduling nodes that check the task configuration information and generate the corresponding tasks; however, only a single node performs task scheduling at any moment while the other node remains on standby, so the scaling capacity of the scheduling node is limited and its Transactions Per Second (TPS) throughput has a bottleneck. In scenarios with a large task volume and high concurrency in a short time, a large number of requests are generated instantaneously; if they are submitted to the scheduling node at the same time, the scheduling node may block or discard tasks, which then cannot be submitted to subsequent execution nodes, causing task execution to fail.
At present, the single-node problem can be mitigated by setting up multiple nodes to schedule tasks in parallel, but the availability of each node cannot be guaranteed, so some tasks may be blocked and fail to be submitted normally. To solve the above problems, embodiments of the present disclosure provide a task scheduling method and a task submitting method, which are described below with reference to specific embodiments.
Fig. 1 is a flow chart of a task scheduling method according to an embodiment of the present disclosure, where the method may be performed by a task scheduling device, and the device may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 1, the method is applied to a scheduling layer of a server, and specifically includes:
step 101, acquiring data distribution information sent by a registry, wherein the data distribution information comprises scheduling node groups corresponding to a plurality of data partitions of a data layer respectively, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node.
The task scheduling method of the embodiment of the disclosure is suitable for task processing scenarios characterized by a large task volume and concentrated task processing in a short time, such as task backtracking. Task backtracking can be understood as re-running historical tasks within a specific time interval, and is typically used for batch re-execution of failed tasks, batch updating of task results after the task configuration is modified, batch retry of tasks after data updates, and the like.
The server may be an electronic device that obtains task data from a client and performs task processing, and in the embodiment of the present disclosure, the server may include a scheduling layer, a data layer, a registry, and a routing layer, where the scheduling layer may be used for task management and scheduling work; the data layer is used for storing various data related to the task, including but not limited to data such as task name, task period, task result, task execution state and the like; the registry may be a function module which is newly added in the embodiment of the present disclosure and can interact with the data layer and the scheduling layer, and stores related information of the data layer and performs data allocation for the scheduling layer to obtain data allocation information; the routing layer may be used for distribution of tasks.
The scheduling layer in the embodiment of the present disclosure may set a plurality of scheduling node groups, each including a primary scheduling node and a backup scheduling node. In some embodiments, the task scheduling method may further include: and setting a distributed lock for each scheduling node group through a distributed coordination system so as to carry out a main and standby election mode, wherein the main and standby election mode represents that only one of the main scheduling node and the standby scheduling node is in a working state. A scheduling node (Scheduler) may be a specific device for task scheduling.
The distributed coordination system may be implemented with ZooKeeper. A distributed lock can be set for each scheduling node group through ZooKeeper to realize the main-standby election mode: after the main scheduling node in a scheduling node group fails, failover can be performed in real time, and the standby scheduling node is switched to become the main scheduling node and takes over its work. The main-standby election mode ensures the high availability of each scheduling node group, which helps improve the success rate of subsequent task scheduling.
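The election described above can be sketched in a few lines. The following is a minimal in-process Python sketch, not the patent's implementation: the `threading.Lock` merely stands in for a ZooKeeper lock path, and all node names are illustrative. Whichever node holds the group's lock acts as the main scheduling node; on failure, the lock is released and the standby node acquires it.

```python
import threading

class SchedulingGroup:
    """In-process sketch of main-standby election for one scheduling
    node group. The lock stands in for the group's ZooKeeper lock path."""

    def __init__(self, group_id):
        self.group_id = group_id
        self._lock = threading.Lock()
        self.master = None

    def try_become_master(self, node_name):
        # Non-blocking acquire: at most one node per group succeeds,
        # so only one of the main/standby pair is ever in working state.
        if self._lock.acquire(blocking=False):
            self.master = node_name
            return True
        return False

    def failover(self, standby_name):
        # On main-node failure, release the lock so the standby can
        # take over the scheduling work in real time.
        self._lock.release()
        self.master = None
        return self.try_become_master(standby_name)

group = SchedulingGroup(group_id=1)
assert group.try_become_master("scheduler-1a")       # main node wins the lock
assert not group.try_become_master("scheduler-1b")   # standby stays on standby
assert group.failover("scheduler-1b")                # standby takes over
```

A real deployment would use an actual coordination service rather than an in-process lock, so that the election survives process and machine failures.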
In some embodiments, the registry may store information of a plurality of data partitions of the data layer, and the task scheduling method may further include, before performing step 101 above: and controlling the master dispatching node of each dispatching node group to send a registration request to a registration center so that the registration center allocates a corresponding data partition for each dispatching node group based on the partition information of the plurality of data partitions, and storing the data allocation information.
The task data in the data layer may be logically divided into a plurality of data partitions for storage; the specific division manner is not limited and may, for example, be hash-based. The registry may obtain partition information of the data partitions of the data layer in advance, and the partition information may include the number of data partitions, and the like. The data partitions in the data layer are obtained by dividing the task data in the data layer, where the task data are submitted to the data layer after the client divides them into a plurality of batches of task data and performs rate-limiting processing on the batches in a sliding-time-window manner.
The scheduling layer can control the master scheduling node of each scheduling node group to send a registration request to the registration center, the registration center can allocate the corresponding data partition for each scheduling node group according to the partition information of the data partition in the data layer, the specific allocation mode is not limited, for example, the mode of sequential allocation, reverse allocation or random allocation can be adopted, the corresponding relation between the data partition and the scheduling node group can be stored to obtain the data allocation information after allocation, each data partition corresponds to one scheduling node group, and after one data partition is allocated, the registration center can not allocate the data partition to other scheduling node groups.
Specifically, before the data of the data layer is read, the scheduling layer may first acquire data allocation information from the registry, so as to determine, according to the data allocation information, a data partition corresponding to each scheduling node group.
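The registration-and-allocation bookkeeping can be illustrated with a short sketch. This is a hypothetical Python model (class and method names are illustrative, not from the patent) using the sequential allocation mode mentioned above: each data partition is assigned to exactly one scheduling node group and never handed out twice.

```python
class Registry:
    """Hypothetical sketch of the registry: allocates each data
    partition to exactly one scheduling node group and stores the
    resulting data allocation information."""

    def __init__(self, num_partitions):
        # Partition info obtained in advance from the data layer.
        self.unassigned = list(range(1, num_partitions + 1))
        self.allocation = {}  # group id -> data partition id

    def register(self, group_id):
        # Sequential allocation (one of several possible modes); an
        # already-assigned partition is not given to another group.
        if group_id not in self.allocation and self.unassigned:
            self.allocation[group_id] = self.unassigned.pop(0)
        return self.allocation.get(group_id)

    def data_allocation_info(self):
        # What the scheduling layer fetches before reading the data layer.
        return dict(self.allocation)

registry = Registry(num_partitions=3)
for gid in (1, 2, 3):
    registry.register(gid)
assert registry.data_allocation_info() == {1: 1, 2: 2, 3: 3}
```

Reverse or random allocation would only change the order in which `unassigned` is consumed; the one-partition-per-group invariant is the same.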
Step 102, controlling the master dispatching node of each dispatching node group to read the task data of the corresponding data partition from the data layer based on the data distribution information, and submitting the task to the routing layer.
Specifically, the scheduling layer may control the master scheduling node in each scheduling node group to read task data of its own corresponding data partition from the data layer, generate task detailed information, store the task detailed information to the data layer, submit the task to the routing layer, the routing layer may distribute the task to the execution layer for task calculation, and then store the calculation result and the task execution state to the data layer.
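The master node's per-partition loop can be sketched as follows. This is an illustrative Python sketch, not the patent's code: a `queue.Queue` stands in for the routing layer's message queue, and a plain list stands in for the data layer's task-detail store.

```python
import queue

# Stand-in for the routing layer's message-queue broker.
routing_layer = queue.Queue()

def run_master(partition_tasks, routing, data_layer_store):
    """Sketch of one master scheduling node: read the task data of its
    own data partition, generate task detail records, persist them back
    to the data layer, and submit each task to the routing layer."""
    for task in partition_tasks:
        detail = {"task": task, "state": "submitted"}
        data_layer_store.append(detail)  # task detail info -> data layer
        routing.put(task)                # task -> routing layer

details = []  # stand-in for the data layer's detail storage
run_master(["task-1", "task-4", "task-7"], routing_layer, details)
assert routing_layer.qsize() == 3
```

Downstream, the execution layer would consume from the queue, compute the task, and write the result and execution state back to the data layer, as described above.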
According to the task scheduling scheme provided by the embodiment of the disclosure, data distribution information sent by a registration center is obtained through a scheduling layer of a server, the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node; and controlling the master dispatching node of each dispatching node group to read task data of the corresponding data partition from the data layer based on the data distribution information, and submitting the task to the routing layer. By adopting the technical scheme, the scheduling nodes of the scheduling layer are grouped and the task data of the data layer are partitioned, so that a plurality of task data can be scheduled in a plurality of scheduling node groups at the same time, the problem of single node performance bottleneck is solved, and as each scheduling node group is provided with a main-standby architecture, the main-standby multi-activity strategy is realized, the high availability of the scheduling nodes is ensured, and the success rate of task scheduling is further improved.
In some embodiments, the registry is further configured to monitor the operation status of each scheduling node group by means of heartbeat keep-alive, and switch its data partition to other scheduling node groups in the scheduling layer when it is determined that an abnormality occurs in one scheduling node group.
The registry monitoring the operation state of each scheduling node group in a heartbeat keep-alive manner can be understood as the registry sending periodic heartbeat signals to each scheduling node group; when no reply signal is received from a scheduling node group, it indicates that the nodes in that scheduling node group have failed, and it can be determined that the scheduling node group is abnormal. The data partition of that scheduling node group is then allocated to another scheduling node group in the scheduling layer that has no abnormality, and the data allocation information is modified accordingly, so that the other scheduling node group processes the data partition.
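The reassignment step after a missed heartbeat can be sketched as below. This is a minimal illustrative Python function (the round-robin redistribution is an assumption; the patent does not fix a redistribution order), where `allocation` maps a group id to its list of data partition ids.

```python
def reassign_partitions(allocation, failed_group):
    """Sketch of the registry's failover step: when a scheduling node
    group misses its heartbeat, move its data partitions to the
    remaining healthy groups and update the allocation information."""
    orphaned = allocation.pop(failed_group, [])
    healthy = sorted(allocation)  # groups that still answer heartbeats
    for i, partition in enumerate(orphaned):
        # Spread orphaned partitions round-robin over healthy groups.
        allocation[healthy[i % len(healthy)]].append(partition)
    return allocation

alloc = {1: [1], 2: [2], 3: [3]}
reassign_partitions(alloc, failed_group=2)  # group 2 stopped replying
# Partition 2 is now handled by a surviving group; no task is orphaned.
```

After this update the registry would push the modified data allocation information back to the scheduling layer, so the takeover group starts reading partition 2.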
In the scheme, the abnormal scheduling node group can be found in time in a heartbeat keep-alive mode, and the data partition of the abnormal scheduling node group is redistributed to other scheduling node groups, so that the problem that the task is blocked and cannot be submitted normally due to the failure of the scheduling node is avoided, and the success rate of task scheduling is further improved.
In some embodiments, the server further includes an execution layer, and the routing layer distributes the tasks to the execution layer for calculation after acquiring the tasks submitted by the scheduling layer through the message queue.
The routing layer may be composed of a message queue. A message queue is a messaging facility that integrates receiving and publishing of messages; it decouples producers and consumers of messages and has characteristics such as single (exactly-once) consumption and data caching, ensuring that each task is consumed only once by a node of the execution layer. By introducing a message queue as the message server (Broker), the routing layer ensures that submitted tasks can be distributed and executed normally, and realizes peak clipping and valley filling of the task processing load.
Exemplarily, FIG. 2 is a schematic diagram of task scheduling provided by an embodiment of the present disclosure. As shown in FIG. 2, solid arrows represent control flows and dashed arrows represent data flows. The scheduling layer may include three scheduling node groups, such as group 1, group 2, and group 3 in the figure; each scheduling node group includes a main scheduling node and a standby scheduling node, and each scheduling node group sets a lock path through ZooKeeper. The task data in the data layer are divided into three data partitions, such as data partition 1, data partition 2, and data partition 3 in the figure. After the registry allocates the data partitions to the scheduling node groups, group 1 corresponds to data partition 1, group 2 corresponds to data partition 2, and group 3 corresponds to data partition 3, and the main scheduling node of each scheduling node group can read its corresponding data partition from the data layer and submit the tasks to the routing layer.
The task scheduling process is described next by way of a specific example. Assume that 100 task data numbered 1-100 are stored in the data layer; the data layer may logically divide the task data into n (e.g., 3) data partitions according to the number. The division rule may be: take the number modulo 3; when the remainder is 1, the logical partition of the task data is data partition 1; when the remainder is 2, it corresponds to data partition 2; and when the remainder is 0, it corresponds to data partition 3. The registry may obtain information such as the number and numbering of the data partitions of the data layer for subsequent use. Assume that the scheduling layer includes m scheduling node groups, where m is generally equal to n or satisfies n % m = 0; each scheduling node group includes two scheduling nodes in a main-standby relationship, and each scheduling node group records its own group identifier (id). After the main scheduling node of each scheduling node group is initialized, it registers its own group identifier, such as group 1, with the registry, reporting that the group's service has started and that its group identifier is 1; the registry may then assign the already registered data partition 1 to scheduling group 1 for processing. Similarly, the registry may allocate a data partition for each scheduling node group, and the main scheduling node of each scheduling node group may read the corresponding data partition from the data layer and submit the tasks to the routing layer.
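The division rule in this example is a one-line function; the sketch below just restates it in Python (the function name is illustrative):

```python
def logical_partition(task_number, n=3):
    """Division rule from the example: take the task number modulo n;
    remainder 1 -> partition 1, remainder 2 -> partition 2,
    remainder 0 -> partition n (here, partition 3)."""
    remainder = task_number % n
    return remainder if remainder != 0 else n

assert logical_partition(1) == 1
assert logical_partition(2) == 2
assert logical_partition(3) == 3     # remainder 0 -> partition 3
assert logical_partition(100) == 1   # 100 % 3 == 1
```

With 100 tasks this yields 34 tasks in partition 1 and 33 each in partitions 2 and 3, so the three scheduling node groups carry a roughly even load.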
The task scheduling scheme of the embodiment of the disclosure realizes a multi-active scheduling mechanism based on data partitioning: the scheduling nodes are grouped and the task data are partitioned at the same time, realizing task scheduling by multiple scheduling node groups over multiple data partitions and supporting horizontal expansion of the scheduling nodes. Each scheduling node group implements a single main-standby multi-active strategy: when the main scheduling node in a scheduling node group fails, the standby scheduling node is switched to become the main scheduling node and takes over the scheduling work, which solves the performance bottleneck problem of the scheduling layer, improves the availability of the scheduling nodes, and further improves the success rate of task scheduling.
Fig. 3 is a flow chart of a task submission method provided by an embodiment of the disclosure, which may be performed by a task submission apparatus, where the apparatus may be implemented in software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 3, the method is applied to a client, and includes:
step 301, acquiring a plurality of task data.
The task data may be data received by the client corresponding to different task requirements; for example, for a task backtracking scenario, the task data may be the tasks to be backtracked, which is not particularly limited here.
Specifically, in response to a trigger operation of a user at the client, the client may receive a plurality of task data; the number of task data is not limited and is generally large.
Step 302, dividing the plurality of task data into a plurality of batches of task data, performing rate-limiting processing on the batches of task data in a sliding-time-window manner, and submitting the batches of task data to the data layer of the server so that the scheduling layer of the server reads the task data from the data layer.
The scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node so as to ensure the high availability of the scheduling nodes.
Batch task data may be combined from a plurality of task data. In the embodiment of the disclosure, after acquiring the plurality of task data, the client may divide them into a plurality of batches of task data according to time; for example, task data with the same commit time may be grouped into one batch, and the commit times of different batches may differ. Each batch of task data has corresponding information such as its commit time, batch task number, batch task configuration identifier, and task period. The client may then perform rate-limiting processing on the batches of task data in a sliding-time-window manner and submit the batches that meet the commit condition to the data layer, so that the scheduling layer reads the task data from the data layer for subsequent scheduling.
In the embodiment of the present disclosure, performing rate-limiting processing on the batches of task data in a sliding-time-window manner may include: inputting the batches of task data one by one into a sliding time window; for each batch of task data, determining whether the batch meets the commit condition according to the window duration and window size of the sliding time window; if so, committing the batch of task data; if not, flowing the batch back into the window to be re-evaluated.
The window duration of the sliding time window may be a time limit for counting batches of task data, for example 5 seconds, and the window size may be understood as a limit on the number of batches, for example 5; the window duration and window size of the sliding time window can be set and adjusted according to service requirements to adapt to different service scenarios. The batches of task data in the sliding time window may be stored as a sorted set (SortedSet), whose underlying structure may be a skip list. A skip list arranges data in order along a specified dimension and records the value of that dimension, so the SortedSet can rapidly obtain the amount of data between two dimension values. In the embodiment of the disclosure, the sorted set in the sliding time window arranges the batches of task data in time order and records their commit times.
The client feeds the plurality of batches of task data one by one into the sliding time window, where each batch carries a commit time, i.e., a timestamp. When a batch enters the window, whether it satisfies the commit condition is determined from the window duration and window size; when the batch satisfies the commit condition, it is committed to the data layer; when it does not, the batch is returned to the sliding time window for re-evaluation until the commit condition is met.
Optionally, each batch of task data has a corresponding commit time, and determining whether the batch satisfies the commit condition from the window duration and window size of the sliding time window includes: determining the number of batches committed within one window duration before the commit time of the batch, and checking whether that number is smaller than the window size; if so, the batch satisfies the commit condition; otherwise, it does not, where the window size represents the maximum number of batches allowed in the sliding time window.
When the client feeds a batch of task data into the sliding time window, it can take the historical time that lies one window duration before the commit time of the current batch, count the number of batches committed between that historical time and the commit time, and compare this count with the window size. If the count is smaller than the window size, the current batch satisfies the commit condition; if the count is greater than or equal to the window size, the current batch does not satisfy the commit condition, i.e., the batch is blocked by the sliding-time-window rate limit.
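A minimal sketch of such a sliding-window rate limiter, using a plain sorted list (via `bisect`) in place of the SortedSet and assuming integer commit-time values; the class and method names are illustrative:

```python
import bisect

class SlidingWindowLimiter:
    """Admit a batch only if fewer than `window_size` batches were
    committed in the `window_duration` before its commit time."""

    def __init__(self, window_duration, window_size):
        self.window_duration = window_duration
        self.window_size = window_size
        self.committed = []  # sorted commit times, like a SortedSet

    def try_commit(self, commit_time):
        # count committed batches in [commit_time - duration, commit_time)
        left = bisect.bisect_left(self.committed, commit_time - self.window_duration)
        right = bisect.bisect_left(self.committed, commit_time)
        if right - left < self.window_size:
            bisect.insort(self.committed, commit_time)
            return True   # commit condition met; submit to data layer
        return False      # blocked; batch re-enters the window later

limiter = SlidingWindowLimiter(window_duration=5, window_size=5)
results = [limiter.try_commit(t) for t in [100, 101, 102, 103, 104, 105]]
# the first five batches pass; the sixth is rejected and would be
# returned to the window for resubmission
```

With these parameters the limiter reproduces the behavior of the worked example below: batches at times 100-104 pass, the batch at time 105 is blocked, and resubmitting it later (e.g., at time 106) succeeds.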
For example, fig. 4 is a schematic diagram of task submission provided by an embodiment of the present disclosure, illustrating rate-limited task submission through a sliding time window. In fig. 4, the window size of the sliding time window is 5 and the window duration is 5 seconds; the total task in the figure represents the plurality of task data, divided into 6 batches, batch task 1 through batch task 6. The commit time of batch task 1 is 100, that of batch task 6 is 105, and the batches are submitted in time order. Batch task 1 first enters the sliding time window; since the number of batches in the window is smaller than 5, batch task 1 satisfies the commit condition, the commit request passes, batch task 1 is committed to the data layer, and its state is updated to committed. Batch tasks 2-5, submitted in sequence afterwards, also pass, until batch task 6 with commit time 105 enters the sliding time window; because the number of batches in the window at that moment equals 5, batch task 6 does not satisfy the commit condition, its commit request is rejected, and batch task 6 is subsequently returned to the window and resubmitted.
According to the task submission scheme provided by the embodiment of the disclosure, the client acquires a plurality of task data, divides them into a plurality of batches of task data, rate-limits the batches with a sliding time window, and submits them to the data layer of the server so that the scheduling layer of the server reads the task data from the data layer. With this technical scheme, the client throttles submissions based on the sliding time window, cutting the task data into multiple batches for submission; this reduces the peak volume of task data submitted in batches and in turn helps reduce blocking in subsequent task scheduling.
The overall task processing flow is further described below through a specific example. By way of example, fig. 5 is a schematic diagram of the overall architecture of task processing provided in an embodiment of the present disclosure, where solid arrows represent control flows and dashed arrows represent data flows. As shown in fig. 5, the overall architecture may be composed of a client 501 and a server 502, and the server 502 may include a data layer, a scheduling layer, a routing layer and an execution layer. The client 501 receives a plurality of task data in response to a user trigger and then limits the number of task data submitted simultaneously through a sliding time window; the limiting magnitude can be configured as needed to reduce the peak number of tasks submitted in batches.
The data layer of the server 502 may store data using a MySQL database and is responsible for storing task metadata (i.e., detailed task information), including but not limited to task name, task period, task result and task execution status. The scheduling layer is responsible for managing and scheduling computation tasks and may be composed of a plurality of distributed scheduling node groups; each group comprises a main scheduling node and a standby scheduling node, and the main and standby nodes of each group implement a distributed lock through ZooKeeper, ensuring that only one node is in the working state at a time to prevent split-brain. In addition, each scheduling node group reads the task data of its corresponding data partition from the data layer, and data reads between different groups are independent.
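The main/standby election described above can be sketched as follows; `threading.Lock` stands in for the ZooKeeper distributed lock and all names are illustrative — a production system would use a coordination-service recipe rather than an in-process lock:

```python
import threading

class SchedulingNodeGroup:
    """Main/standby pair: whichever node acquires the (distributed)
    lock becomes the working main node; the other stays standby.
    `threading.Lock` stands in here for a ZooKeeper distributed lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self.main = None

    def elect(self, node_name):
        # non-blocking acquire: only one node can win, preventing split-brain
        if self._lock.acquire(blocking=False):
            self.main = node_name
            return True   # this node is now the working main node
        return False      # remains standby

    def failover(self, standby_name):
        # on main-node failure the lock is released and the standby takes over
        self._lock.release()
        return self.elect(standby_name)

group = SchedulingNodeGroup()
assert group.elect("node-A") is True     # node-A becomes main
assert group.elect("node-B") is False    # node-B stays standby
assert group.failover("node-B") is True  # standby promoted after failure
```

The key property this mimics is that election and failover are serialized through a single lock, so at any moment at most one node of the group is in the working state.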
The routing layer of the server 502 is responsible for distributing tasks: it receives the detailed task data submitted by the scheduling layer and evenly distributes the task information to the computing nodes of the execution layer. The routing layer may be composed of a message queue; a message queue is a messaging facility that integrates receiving and publishing of messages, decouples message producers from consumers, and has characteristics such as single consumption and data caching, ensuring that each task is consumed by an execution-layer node only once. The execution layer is responsible for the concrete execution of computation tasks and comprises a plurality of execution nodes (workers); after receiving a task distributed by the routing layer, the execution layer performs stateless computation according to the task configuration and updates the data layer with the computed task result and the task execution status.
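A toy sketch of the routing/execution interaction, with `queue.Queue` standing in for the message queue and a doubling function as the placeholder stateless computation (all names are illustrative):

```python
import queue
import threading

task_queue = queue.Queue()   # stands in for the routing layer's message queue
results = {}                 # stands in for result updates to the data layer

def worker():
    """Execution-layer node: stateless computation on each task,
    then record the result (here, into a dict)."""
    while True:
        task = task_queue.get()
        if task is None:              # shutdown signal
            task_queue.task_done()
            break
        name, payload = task
        results[name] = payload * 2   # placeholder stateless computation
        task_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

# scheduling layer submits tasks; the queue buffers bursts (peak shaving)
for i in range(6):
    task_queue.put((f"task-{i}", i))
for _ in threads:
    task_queue.put(None)
task_queue.join()
for t in threads:
    t.join()
```

Because each queued task is delivered to exactly one worker, adding more worker threads (or, in the real architecture, more execution nodes) scales the execution layer horizontally without changing the producer side.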
The above task processing architecture can solve problems encountered in task processing along multiple dimensions: task submission, task scheduling, task routing and task execution. On the task submission dimension, the client implements a submission rate-limiting strategy based on a sliding time window, cutting massive backfill tasks into multiple batches according to the time window and reducing the peak number of tasks submitted in batches. On the task scheduling dimension, the scheduling layer implements a multi-active scheduling scheme for scheduling nodes based on hash sharding, enabling multiple scheduling nodes to schedule tasks simultaneously and supporting horizontal scaling of scheduling nodes; in addition, each scheduling node group implements a main/standby multi-active strategy, so that when a main node fails, the standby node is switched to main and takes over scheduling, resolving the bottleneck on the task scheduling side. On the task routing dimension, the routing layer introduces a message queue as the message server, ensuring that submitted tasks are normally distributed and executed and achieving peak shaving and valley filling of the task processing volume. On the task execution dimension, the execution layer introduces execution nodes to perform stateless computation of submitted tasks: an execution node fetches task data from the routing layer and then computes the result, and the stateless nature of the nodes gives the execution layer horizontal scaling capability.
Fig. 6 is a schematic structural diagram of a task scheduling device according to an embodiment of the present disclosure; the device may be implemented by software and/or hardware and may generally be integrated in an electronic device. As shown in fig. 6, the device is disposed on the scheduling layer of a server that further includes a data layer, a registry and a routing layer, and the device includes:
the allocation module 601 is configured to obtain data allocation information sent by the registry, where the data allocation information includes a scheduling node group corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, and the scheduling layer includes a plurality of scheduling node groups, and each scheduling node group is composed of a primary scheduling node and a standby scheduling node;
and the scheduling module 602 is configured to control the master scheduling node of each scheduling node group to read task data of a corresponding data partition from the data layer based on the data allocation information, and submit a task to the routing layer.
Optionally, the device further includes an election module, configured to:
and setting a distributed lock for each scheduling node group through a distributed coordination system to perform main/standby election, wherein the main/standby election ensures that only one of the main scheduling node and the standby scheduling node is in the working state.
Optionally, the registry stores information of the plurality of data partitions of the data layer, and the apparatus further includes an allocation module for:
and controlling the master scheduling node of each scheduling node group to send a registration request to the registration center so that the registration center allocates a corresponding data partition for each scheduling node group based on the partition information of the plurality of data partitions, and storing the data allocation information.
Optionally, the registry is further configured to monitor an operation state of each of the scheduling node groups in a heartbeat keep-alive manner, and switch the data partition of one scheduling node group to other scheduling node groups in the scheduling layer when determining that the scheduling node group is abnormal.
Optionally, the server further includes an execution layer, and after the routing layer obtains the task submitted by the scheduling layer through the message queue, the routing layer distributes the task to the execution layer for calculation.
The task scheduling device provided by the embodiment of the disclosure can execute the task scheduling method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 7 is a schematic structural diagram of a task submitting apparatus according to an embodiment of the present disclosure; the apparatus may be implemented by software and/or hardware and may generally be integrated in an electronic device. As shown in fig. 7, the apparatus is disposed at a client and includes:
A data module 701, configured to obtain a plurality of task data;
and the submitting module 702 is configured to divide the plurality of task data into a plurality of batch task data, perform rate-limiting processing on the batch task data in a sliding time window manner, and then submit the batch task data to a data layer of a server, so that a scheduling layer of the server reads the task data from the data layer, where the scheduling layer includes a plurality of scheduling node groups, each composed of a main scheduling node and a standby scheduling node.
Optionally, the submitting module 702 includes:
the input unit is used for inputting the plurality of batch task data into the sliding time window one by one;
the determining unit is configured to determine, for each batch of task data, whether the batch satisfies the commit condition according to the window duration and window size of the sliding time window; if so, submit the batch of task data; if not, return the batch to the window for re-evaluation.
Optionally, each batch of task data has a corresponding commit time, and the determining unit is configured to:
determining the number of batches of task data committed within one window duration before the commit time of the batch, and determining whether the number is smaller than the window size; if so, determining that the batch satisfies the commit condition; otherwise, determining that the batch does not satisfy the commit condition, where the window size represents the maximum number of batches of task data in the sliding time window.
Optionally, the plurality of batches of task data in the sliding time window are stored in a sorted set, which arranges the batches in time order and records their commit times.
The task submission device provided by the embodiment of the disclosure can execute the task submission method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
Embodiments of the present disclosure also provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the task scheduling method and/or the task submitting method provided by any of the embodiments of the present disclosure.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now in particular to fig. 8, a schematic diagram of an electronic device 800 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 800 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, as well as stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 800 may include a processing device (e.g., a central processor, a graphics processor, etc.) 801, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage device 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing device 801, the ROM 802 and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage 808 including, for example, magnetic tape, hard disk, etc.; communication means 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 shows an electronic device 800 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. When executed by the processing device 801, the computer program performs the above-described functions defined in the task scheduling method and/or the task submitting method of the embodiments of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring data distribution information sent by the registry, wherein the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node; and controlling the master scheduling node of each scheduling node group to read task data of the corresponding data partition from the data layer based on the data allocation information, and submitting tasks to the routing layer.
Alternatively, the computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquiring a plurality of task data; dividing the task data into a plurality of batch task data, carrying out flow limiting processing on the batch task data in a sliding time window mode, and then submitting the batch task data to a data layer of a server so that a dispatching layer of the server reads the task data from the data layer, wherein the dispatching layer comprises a plurality of dispatching node groups, and each dispatching node group consists of a main dispatching node and a standby dispatching node.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by substituting the features described above with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. A task scheduling method, applied to a scheduling layer of a server, the server further comprising a data layer, a registry and a routing layer, the method comprising:
acquiring data distribution information sent by the registry, wherein the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node;
and controlling the master scheduling node of each scheduling node group to read task data of the corresponding data partition from the data layer based on the data allocation information, and submitting tasks to the routing layer.
2. The method according to claim 1, wherein the method further comprises:
and setting a distributed lock for each scheduling node group through a distributed coordination system to perform main/standby election, wherein the main/standby election ensures that only one of the main scheduling node and the standby scheduling node is in the working state.
3. The method of claim 1, wherein the registry stores information for the plurality of data partitions of the data layer, the method further comprising:
and controlling the master scheduling node of each scheduling node group to send a registration request to the registration center so that the registration center allocates a corresponding data partition for each scheduling node group based on the partition information of the plurality of data partitions, and storing the data allocation information.
4. A method according to claim 3, wherein the registry is further configured to monitor the operation status of each of the scheduling node groups by means of heartbeat keep-alive, and to switch its data partition to other scheduling node groups in the scheduling layer when it is determined that an abnormality occurs in one scheduling node group.
5. The method of claim 1, wherein the server further comprises an execution layer, and the routing layer distributes tasks to the execution layer for calculation after obtaining the tasks submitted by the scheduling layer through a message queue.
6. A task submitting method, applied to a client, the method comprising:
acquiring a plurality of task data;
dividing the plurality of task data into a plurality of batch task data, and performing rate-limiting processing on the plurality of batch task data in a sliding time window manner;
and submitting the task data to a data layer of a server so that a scheduling layer of the server reads the task data from the data layer, wherein the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node.
7. The method of claim 6, wherein the performing rate-limiting processing on the plurality of batch task data in a sliding time window manner comprises:
inputting the batch task data one by one into the sliding time window;
determining, for each batch of task data, whether the batch satisfies the commit condition according to the window duration and the window size of the sliding time window; if so, submitting the batch of task data; if not, returning the batch to the sliding time window for re-evaluation.
8. The method of claim 7, wherein each batch of task data has a corresponding commit time, wherein determining whether the batch of task data meets commit conditions based on a window duration of the sliding time window and a window size includes:
determining the number of batches of task data committed within one window duration before the commit time of the batch, and determining whether the number is smaller than the window size; if so, determining that the batch satisfies the commit condition; otherwise, determining that the batch does not satisfy the commit condition, wherein the window size represents the maximum number of batches of task data in the sliding time window.
9. The method of claim 7, wherein the plurality of batch task data in the sliding time window is stored in a sorted set that characterizes the arrangement of the plurality of batch task data in a time dimension order and records commit time.
10. A task scheduling device, disposed on a scheduling layer of a server, the server further comprising a data layer, a registry and a routing layer, the device comprising:
the distribution module is used for acquiring data distribution information sent by the registration center, wherein the data distribution information comprises scheduling node groups respectively corresponding to a plurality of data partitions of the data layer, each data partition corresponds to one scheduling node group, the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a main scheduling node and a standby scheduling node;
And the scheduling module is used for controlling the main scheduling node of each scheduling node group to read task data of the corresponding data partition from the data layer based on the data allocation information and submitting the task to the routing layer.
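One possible reading of the scheduling module in claim 10 can be sketched as follows. This is a simplified illustration, not the patented implementation; the data structures and function names are assumptions, with plain dictionaries standing in for the registration center's distribution information and the data layer:

```python
from dataclasses import dataclass

@dataclass
class SchedulingNodeGroup:
    primary: str   # primary scheduling node
    standby: str   # standby scheduling node, takes over if the primary fails

def schedule(distribution: dict, data_layer: dict, routing_layer: list) -> None:
    """For each (partition -> node group) entry in the distribution info sent
    by the registration center, have the group's primary node read that
    partition's task data from the data layer and submit it to the routing layer."""
    for partition, group in distribution.items():
        for task in data_layer.get(partition, []):
            routing_layer.append((group.primary, task))
```

Because each data partition maps to exactly one node group, the groups can scan their partitions concurrently without contending for the same task data, which is what allows the scheduling layer to schedule multiple task data simultaneously.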
11. A task submitting apparatus, arranged at a client, comprising:
a data module, configured to acquire a plurality of task data;
and a submitting module, configured to divide the plurality of task data into a plurality of batches of task data, perform rate-limiting processing on the batches of task data by means of a sliding time window, and then submit the batches of task data to a data layer of a server so that a scheduling layer of the server reads the task data from the data layer, wherein the scheduling layer comprises a plurality of scheduling node groups, and each scheduling node group consists of a primary scheduling node and a standby scheduling node.
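The batching step performed by the submitting module of claim 11 amounts to splitting the task list into fixed-size chunks. A minimal sketch (the batch size is a hypothetical parameter; the patent does not specify how batches are sized):

```python
def split_into_batches(tasks: list, batch_size: int) -> list:
    """Divide the plurality of task data into batches of at most batch_size,
    preserving the original task order."""
    return [tasks[i:i + batch_size] for i in range(0, len(tasks), batch_size)]
```

Each resulting batch would then pass through the sliding-time-window rate limiter of claims 7-9 before being committed to the data layer.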
12. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor being configured to read the executable instructions from the memory and execute the instructions to implement the method of any one of claims 1 to 9.
13. A computer-readable storage medium, wherein the storage medium stores a computer program for executing the method of any one of claims 1 to 9.
CN202210600795.4A 2022-05-30 2022-05-30 Task scheduling method, task submitting method, device, equipment and medium Pending CN117201496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210600795.4A CN117201496A (en) 2022-05-30 2022-05-30 Task scheduling method, task submitting method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210600795.4A CN117201496A (en) 2022-05-30 2022-05-30 Task scheduling method, task submitting method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117201496A true CN117201496A (en) 2023-12-08

Family

ID=88994757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210600795.4A Pending CN117201496A (en) 2022-05-30 2022-05-30 Task scheduling method, task submitting method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117201496A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117453150A * 2023-12-25 2024-01-26 杭州阿启视科技有限公司 Method for implementing multiple instances of video storage scheduling service
CN117453150B * 2023-12-25 2024-04-05 杭州阿启视科技有限公司 Method for implementing multiple instances of video storage scheduling service

Similar Documents

Publication Publication Date Title
US11146502B2 (en) Method and apparatus for allocating resource
CN114020470B (en) Resource allocation method and device, readable medium and electronic equipment
CN112114950A (en) Task scheduling method and device and cluster management system
CN111950988A (en) Distributed workflow scheduling method and device, storage medium and electronic equipment
CN109117252B (en) Method and system for task processing based on container and container cluster management system
CN111427706B (en) Data processing method, multi-server system, database, electronic device and storage medium
CN116166395A (en) Task scheduling method, device, medium and electronic equipment
CN113064744A (en) Task processing method and device, computer readable medium and electronic equipment
CN113553178A (en) Task processing method and device and electronic equipment
CN113722056A (en) Task scheduling method and device, electronic equipment and computer readable medium
CN110673959A (en) System, method and apparatus for processing tasks
CN117201496A (en) Task scheduling method, task submitting method, device, equipment and medium
CN115328741A (en) Exception handling method, device, equipment and storage medium
CN109842500A (en) A kind of dispatching method and system, working node and monitoring node
CN115167992A (en) Task processing method, system, device, server, medium, and program product
US9672073B2 (en) Non-periodic check-pointing for fine granular retry of work in a distributed computing environment
CN115629853A (en) Task scheduling method and device
CN111694672B (en) Resource allocation method, task submission method, device, electronic equipment and medium
CN114035861A (en) Cluster configuration method and device, electronic equipment and computer readable medium
CN114489978A (en) Resource scheduling method, device, equipment and storage medium
CN115599507A (en) Data processing method, execution workstation, electronic device and storage medium
CN113472638A (en) Edge gateway control method, system, device, electronic equipment and storage medium
CN113703945A (en) Scheduling method, device, equipment and storage medium of micro-service cluster
CN115878586B (en) IPFS storage encapsulation method and device, electronic equipment and readable storage medium
CN112148448B (en) Resource allocation method, apparatus, device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination