CN114880096A - Task scheduling method and device - Google Patents

Task scheduling method and device

Info

Publication number
CN114880096A
Authority
CN
China
Prior art keywords
task
data
scheduling mode
processed
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210535660.4A
Other languages
Chinese (zh)
Inventor
陈志鹏
帅红波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202210535660.4A
Publication of CN114880096A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a task scheduling method, including: starting a task according to the current scheduling mode, acquiring first data that the task needs to process, and processing the first data; setting the task's target scheduling mode for the next stage according to the data volume of the first data and a preset threshold corresponding to the task; and acquiring, according to the target scheduling mode, second data that the task needs to process, and processing the second data. Because the method adjusts the scheduling mode of the next stage according to how the volume of task data acquired in each stage compares with the preset threshold, it avoids wasting server performance on frequent data acquisition when little task data is pending, and, when much task data is pending, shortens the acquisition interval so that the task data is responded to in time and task scheduling completes quickly.

Description

Task scheduling method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task scheduling method, apparatus, server, computer-readable storage medium, and computer program product.
Background
With the expansion of banking business, banking systems must process more and more tasks, and the volume of data those tasks handle has grown sharply. Different tasks have different real-time requirements: for example, an identity-verification task in an account-opening service must return its result in real time, whereas a compensation-settlement task in a payroll service need not return its processing result in real time.
For tasks with weak real-time requirements, banking systems generally schedule by polling. The server polls at a fixed time interval; if the poll finds data that the task needs to process, the server processes it, and otherwise the server waits for the next poll.
However, with this approach, polling wastes server performance when no task data is pending, and the fixed time interval prevents task data from being processed in time when it is pending. The art therefore needs a task scheduling method that responds to task data in time while conserving server performance.
Disclosure of Invention
The present disclosure provides a task scheduling method that responds to task data in time while conserving server performance, so that task scheduling completes quickly. The disclosure also provides an apparatus, a server, a computer-readable storage medium, and a computer program product corresponding to the method.
In a first aspect, the present disclosure provides a task scheduling method. The method comprises the following steps:
starting a task according to the current scheduling mode, acquiring first data that the task needs to process, and processing the first data;
setting a target scheduling mode for the next stage of the task according to the data volume of the first data and a preset threshold corresponding to the task;
and acquiring, according to the target scheduling mode, second data that the task needs to process, and processing the second data.
In some possible implementations, setting the target scheduling mode for the next stage of the task according to the data volume of the first data and the preset threshold corresponding to the task includes:
when the data volume of the first data is less than or equal to the preset threshold corresponding to the task, setting the target scheduling mode for the next stage to a first scheduling mode, where the first scheduling mode indicates that data the task needs to process is acquired at a first time interval;
and when the data volume of the first data is greater than the preset threshold corresponding to the task, setting the target scheduling mode for the next stage to a second scheduling mode, where the second scheduling mode indicates that data the task needs to process is acquired at a second time interval, the second time interval being shorter than the first time interval.
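This threshold rule can be sketched as a small mode-selection function. The mode names and interval values below are illustrative assumptions; the disclosure leaves the concrete intervals to configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SchedulingMode:
    name: str
    poll_interval_s: float  # seconds between fetches of pending task data

# Hypothetical interval values; the disclosure leaves them to configuration.
FIRST_MODE = SchedulingMode("first", poll_interval_s=1.0)
SECOND_MODE = SchedulingMode("second", poll_interval_s=0.1)

def target_mode(data_volume: int, threshold: int) -> SchedulingMode:
    """Choose the scheduling mode for the next stage.

    A data volume at or below the threshold keeps the longer first
    interval; a volume above it switches to the shorter second interval.
    """
    return FIRST_MODE if data_volume <= threshold else SECOND_MODE
```

Note that a volume exactly equal to the threshold stays in the first mode, matching the "less than or equal to" branch above.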
In some possible implementations, when the task supports concurrency, the first scheduling mode further indicates that data is processed with a first number of threads,
and the second scheduling mode further indicates that data is processed with a second number of threads, the first number being smaller than the second number.
In some possible implementations, the second number is no greater than twice the number of central processing units (CPUs).
In some possible implementations, the method further includes:
pre-configuring attributes of the task, where the attributes include one or more of: whether concurrency is supported, whether a new run starts only after the previous run completes, the first time interval, the second time interval, and the task name.
In some possible implementations, the data that the task needs to process, including the first data and the second data, is distributed non-uniformly and discretely.
In a second aspect, the present disclosure provides a task scheduling apparatus. The device comprises:
the scheduling module is used for starting a task according to the current scheduling mode, acquiring first data to be processed by the task, and processing the first data;
the setting module is used for setting a target scheduling mode of the next stage of the task according to the data volume of the first data and a preset threshold corresponding to the task;
and the scheduling module is further configured to acquire second data to be processed by the task according to the target scheduling mode, and process the second data.
In a third aspect, the present disclosure provides a server. The server comprises a processor and a memory, the memory having instructions stored therein, the processor executing the instructions to cause the server to perform the method according to the first aspect of the present disclosure or any implementation manner of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium. The computer readable storage medium has stored therein instructions that, when executed on a server, cause the server to perform the method of the first aspect or any of the implementation manners of the first aspect.
In a fifth aspect, the present disclosure provides a computer program product. The computer program product comprises computer readable instructions which, when run on a server, cause the server to perform the method of the first aspect or any of the implementations of the first aspect described above.
The implementations provided by the above aspects may be further combined to provide still further implementations.
Based on the above description, it can be seen that the technical solution of the present disclosure has the following beneficial effects:
Specifically, the method starts a task according to the current scheduling mode, acquires and processes first data that the task needs to process, sets a target scheduling mode for the next stage of the task according to the data volume of the acquired first data and a preset threshold corresponding to the task, and then, according to the target scheduling mode, acquires and processes second data that the task needs to process. Because the scheduling mode of the next stage is adjusted according to how the volume of task data acquired in each stage compares with the preset threshold, the method avoids wasting server performance on frequent data acquisition when little task data is pending, and, when much task data is pending, shortens the acquisition interval so that the task data is responded to in time and task scheduling completes quickly.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of a task scheduling method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a task scheduling apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a server for implementing task scheduling according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration only and are not intended to limit the scope of the disclosure.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise.
In order to facilitate understanding of the technical solutions of the present disclosure, specific application scenarios of the present disclosure are described below.
With the development of computer technology, banking services have gradually moved from offline to online processing, and some of the tasks a banking system must handle have weak real-time requirements. For example, in a payroll service, the banking system need not return the results of a compensation-settlement task to the user in real time.
For such tasks, the banking system usually polls, that is, it invokes the task periodically; if one invocation of the task fails, the next scheduled invocation can make up for it. However, because the task is invoked at a fixed time interval, polling wastes the banking system's performance when there is no data for the task to process, and when there is data to process, the fixed interval keeps the system from responding to the task data in time.
For this reason, the embodiments of the present disclosure provide a task scheduling method. Specifically, the method starts a task according to the current scheduling mode, acquires and processes first data that the task needs to process, sets a target scheduling mode for the next stage of the task according to the data volume of the acquired first data and a preset threshold corresponding to the task, and then, according to the target scheduling mode, acquires and processes second data that the task needs to process.
Because the scheduling mode of the next stage is adjusted according to how the volume of task data acquired in each stage compares with the preset threshold, the method avoids wasting server performance on frequent data acquisition when little task data is pending, and, when much task data is pending, shortens the acquisition interval so that the task data is responded to in time and task scheduling completes quickly.
Next, a task scheduling method provided by the embodiments of the present disclosure is described in detail with reference to the accompanying drawings.
Referring to a flow diagram of a task scheduling method shown in fig. 1, the method may be executed by a server, and specifically includes the following steps:
s101: the server calls the task according to the current scheduling mode, acquires first data needing to be processed by the task, and processes the first data.
In this disclosed embodiment, the server invokes the task according to the current scheduling mode, and in some possible implementations, the current scheduling mode of the server may be the first scheduling mode. After the server invokes the task, the server may obtain first data that the task needs to process, where it should be noted that the first data satisfies non-uniform discrete distribution, for example, collected system Central Processing Unit (CPU) information and memory data information are not uniformly and discretely distributed, and are not suitable for the embodiment of the present disclosure.
S102: and the server sets a target scheduling mode of the task at the next stage according to the data volume of the first data and a preset threshold corresponding to the task.
In the embodiment of the present disclosure, the server may compare the data volume of the first data with the preset threshold corresponding to the task and adjust the target scheduling mode of the next stage according to the comparison result. The preset threshold may default to 0 or may be set according to the actual task.
In some possible implementations, the server may acquire the first data that the task needs to process in the first scheduling mode, where the first scheduling mode indicates that the data the task needs to process is acquired at a first time interval. The first time interval may be set from experience; for example, it may be set to 1 second.
As a concrete example, suppose the preset threshold is set to 10. When the data volume of the first data is less than or equal to the preset threshold, for example 5, the target scheduling mode of the next stage is set to the first scheduling mode, that is, the scheduling mode remains unchanged. When the data volume of the first data is greater than the preset threshold, for example 20, the target scheduling mode of the next stage is set to the second scheduling mode, which indicates that data the task needs to process is acquired at a second time interval shorter than the first time interval.
It should be noted that when the task supports concurrency, the server may process data with a first number of threads in the first scheduling mode and with a second number of threads in the second scheduling mode, where the first number is smaller than the second number and the second number is no greater than twice the number of central processing units (CPUs). That is to say, when the task supports concurrency, the second scheduling mode uses an event bus together with more threads from the thread pool to process data, so a processing result can be obtained faster.
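A minimal sketch of this thread-count rule follows. The concrete values are assumptions: the disclosure only requires that the first number be smaller than the second and that the second not exceed twice the CPU count. The `pool_size` and `process_batch` names are hypothetical:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def pool_size(mode: str) -> int:
    """Illustrative thread counts for a concurrency-enabled task."""
    cpus = os.cpu_count() or 1
    second = 2 * cpus            # the stated cap: at most twice the CPU count
    first = max(1, second // 4)  # any value below `second` satisfies the rule
    return second if mode == "second" else first

def process_batch(batch, mode, handle):
    """Process a batch with the thread count the current mode indicates."""
    with ThreadPoolExecutor(max_workers=pool_size(mode)) as pool:
        return list(pool.map(handle, batch))
```

Capping the pool at twice the CPU count keeps thread-switching overhead bounded while still allowing I/O-bound work to overlap.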
S103: and the server acquires second data required to be processed by the task according to the target scheduling mode and processes the second data.
It should be noted that the second data, like the first data, should be distributed non-uniformly and discretely; data such as periodically collected system central processing unit (CPU) metrics and memory metrics is not distributed non-uniformly and discretely, and is therefore not suited to the embodiments of the present disclosure.
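Taken together, steps S101 through S103 form an adaptive polling loop. The sketch below assumes illustrative interval values and uses placeholder `fetch_batch` and `process` callables for the task's actual data access; the loop is bounded by `stages` purely so a demonstration terminates, whereas a real scheduler would run indefinitely:

```python
import time

def run_stages(fetch_batch, process, threshold=10,
               first_interval=1.0, second_interval=0.1, stages=3):
    """Run `stages` scheduling stages, adapting the poll interval.

    Each stage fetches the pending batch and processes it (S101/S103),
    then sets the interval for the next stage by comparing the batch
    size with the preset threshold (S102). Returns the final interval.
    """
    interval = first_interval
    for _ in range(stages):
        batch = fetch_batch()   # acquire the data the task needs to process
        process(batch)          # ... and process it
        # Set the target scheduling mode for the next stage.
        interval = second_interval if len(batch) > threshold else first_interval
        time.sleep(interval)    # wait out the current stage
    return interval
```

With the example threshold of 10, a batch of 20 records switches the next stage to the short interval, and an empty batch switches it back.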
In some possible implementations, the embodiments of the present disclosure further include: the server pre-configures attributes of the task, where the attributes include one or more of: whether concurrency is supported, whether a new run starts only after the previous run completes, the first time interval, the second time interval, and the task name.
The server may configure the attributes of a task in advance according to the task's actual situation. For example, the attributes of a compensation-settlement task in a payroll service may be configured as: concurrency supported, no waiting for the previous run, a first time interval of 20 milliseconds, a second time interval of 2 milliseconds, and the task name "compensation settlement".
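The pre-configured attributes can be sketched as a simple record. The field names are hypothetical, but the values mirror the compensation-settlement example above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskAttributes:
    """Pre-configured task attributes named in the disclosure."""
    name: str
    supports_concurrency: bool
    wait_for_previous_run: bool  # start a new run only after the last completes
    first_interval_ms: int
    second_interval_ms: int

settlement = TaskAttributes(
    name="compensation settlement",
    supports_concurrency=True,
    wait_for_previous_run=False,  # "no waiting" in the example
    first_interval_ms=20,
    second_interval_ms=2,
)
```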
In summary, the method starts a task according to the current scheduling mode, acquires and processes first data that the task needs to process, sets a target scheduling mode for the next stage of the task according to the data volume of the acquired first data and a preset threshold corresponding to the task, and then, according to the target scheduling mode, acquires and processes second data that the task needs to process. Because the scheduling mode of the next stage is adjusted according to how the volume of task data acquired in each stage compares with the preset threshold, the method avoids wasting server performance on frequent data acquisition when little task data is pending, and, when much task data is pending, shortens the acquisition interval so that the task data is responded to in time and task scheduling completes quickly.
Based on the method provided by the embodiments of the present disclosure, the embodiments of the present disclosure also provide a task scheduling apparatus corresponding to the method. The units/modules described in the embodiments of the present disclosure may be implemented by software or hardware, and in some cases the name of a unit/module does not limit the unit/module itself.
Referring to fig. 2, a schematic diagram of a task scheduling apparatus 200 includes:
the scheduling module 201 is configured to start a task according to a current scheduling mode, acquire first data that the task needs to process, and process the first data;
a setting module 202, configured to set a target scheduling mode of a next stage of the task according to the data amount of the first data and a preset threshold corresponding to the task;
the scheduling module 201 is further configured to obtain second data that needs to be processed by the task according to the target scheduling mode, and process the second data.
In some possible implementations, the setting module 202 is specifically configured to:
when the data volume of the first data is less than or equal to the preset threshold corresponding to the task, set the target scheduling mode for the next stage to a first scheduling mode, where the first scheduling mode indicates that data the task needs to process is acquired at a first time interval;
and when the data volume of the first data is greater than the preset threshold corresponding to the task, set the target scheduling mode for the next stage to a second scheduling mode, where the second scheduling mode indicates that data the task needs to process is acquired at a second time interval, the second time interval being shorter than the first time interval.
In some possible implementations, when the task supports concurrency, the first scheduling mode further indicates that data is processed with a first number of threads, and the second scheduling mode further indicates that data is processed with a second number of threads, the first number being smaller than the second number.
In some possible implementations, the apparatus 200 further includes:
and the configuration module is used for pre-configuring attributes of the task, where the attributes include one or more of: whether concurrency is supported, whether a new run starts only after the previous run completes, the first time interval, the second time interval, and the task name.
The task scheduling apparatus 200 according to the embodiment of the present disclosure may correspond to performing the method described in the embodiment of the present disclosure, and the above and other operations and/or functions of each module/unit of the task scheduling apparatus 200 are respectively for implementing corresponding flows of each method in the embodiment shown in fig. 1, and are not described herein again for brevity.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. Referring to the structural diagram of the server 300 for implementing task scheduling shown in fig. 3, it should be noted that the server shown in fig. 3 is only an example and imposes no limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the server 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data necessary for the operation of the server 300. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the server 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 illustrates a server 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
The present disclosure also provides a computer-readable storage medium, also referred to as a machine-readable medium. In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. By contrast, a computer-readable signal medium may comprise a propagated data signal with computer-readable program code embodied therein, either in baseband or as part of a carrier wave; such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic or optical forms, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wire, optical cable, radio frequency (RF), or any suitable combination of the foregoing.
The computer-readable medium carries one or more programs which, when executed by the server, cause the server to: start a task according to the current scheduling mode, acquire first data that the task needs to process, and process the first data; set a target scheduling mode for the next stage of the task according to the data volume of the first data and a preset threshold corresponding to the task; and acquire, according to the target scheduling mode, second data that the task needs to process, and process the second data.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means, or may be installed from a storage means. The computer program, when executed by a processing device, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
While several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
The foregoing description covers only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features disclosed in this disclosure that have similar functions.

Claims (10)

1. A method for task scheduling, the method comprising:
starting a task according to a current scheduling mode, acquiring first data that the task needs to process, and processing the first data;
setting a target scheduling mode for the next stage of the task according to a data volume of the first data and a preset threshold corresponding to the task; and
acquiring, according to the target scheduling mode, second data that the task needs to process, and processing the second data.
2. The method according to claim 1, wherein the setting a target scheduling mode for the next stage of the task according to the data volume of the first data and the preset threshold corresponding to the task comprises:
when the data volume of the first data is less than or equal to the preset threshold corresponding to the task, setting the target scheduling mode for the next stage of the task to a first scheduling mode, wherein the first scheduling mode indicates that data the task needs to process is acquired at a first time interval; and
when the data volume of the first data is greater than the preset threshold corresponding to the task, setting the target scheduling mode for the next stage of the task to a second scheduling mode, wherein the second scheduling mode indicates that data the task needs to process is acquired at a second time interval, the second time interval being shorter than the first time interval.
3. The method of claim 2, wherein, when the task supports concurrency, the first scheduling mode further indicates that data is processed using a first number of threads, and
the second scheduling mode further indicates that data is processed using a second number of threads, the first number being smaller than the second number.
4. The method of claim 3, wherein the second number is no greater than twice a number of Central Processing Units (CPUs).
5. The method of claim 1, further comprising:
pre-configuring attributes of the task, wherein the attributes comprise one or more of: whether concurrency is supported, whether a new task is started only after the last task is completed, the first time interval, the second time interval, and a task name.
6. The method of claim 1, wherein data that the task needs to process is discretely and non-uniformly distributed, the data comprising the first data and the second data.
7. A task scheduling apparatus, characterized in that the apparatus comprises:
a scheduling module, configured to start a task according to a current scheduling mode, acquire first data that the task needs to process, and process the first data; and
a setting module, configured to set a target scheduling mode for the next stage of the task according to a data volume of the first data and a preset threshold corresponding to the task;
wherein the scheduling module is further configured to acquire, according to the target scheduling mode, second data that the task needs to process, and to process the second data.
8. A server, comprising a processor and a memory, the memory having stored therein instructions, the processor executing the instructions to cause the server to perform the method of any of claims 1 to 6.
9. A computer readable storage medium comprising computer readable instructions which, when run on a server, cause the server to perform the method of any one of claims 1 to 6.
10. A computer program product comprising computer readable instructions which, when run on a server, cause the server to perform the method of any one of claims 1 to 6.
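For concurrency-enabled tasks, claims 3 and 4 pair each scheduling mode with a worker-thread count and cap the larger count at twice the number of CPUs. A minimal sketch of that selection; the default counts of 2 and 16 are assumptions, since the claims fix only their ordering and the CPU-based cap:

```python
import os

def worker_threads(mode, first_number=2, second_number=16):
    """Pick the thread count for a concurrency-enabled task by scheduling mode."""
    if mode == "first":
        return first_number                 # first mode: low volume, fewer threads
    cpus = os.cpu_count() or 1
    # Claim 4: the second number is no greater than twice the number of CPUs.
    return min(second_number, 2 * cpus)
```

Capping at twice the CPU count keeps the busy mode from oversubscribing the machine when many tasks switch to their second scheduling mode at once.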
CN202210535660.4A 2022-05-17 2022-05-17 Task scheduling method and device Pending CN114880096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210535660.4A CN114880096A (en) 2022-05-17 2022-05-17 Task scheduling method and device


Publications (1)

Publication Number Publication Date
CN114880096A true CN114880096A (en) 2022-08-09

Family

ID=82676292



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination