CN113535362A - Distributed scheduling system architecture and micro-service workflow scheduling method - Google Patents
- Publication number
- CN113535362A CN113535362A CN202110841580.7A CN202110841580A CN113535362A CN 113535362 A CN113535362 A CN 113535362A CN 202110841580 A CN202110841580 A CN 202110841580A CN 113535362 A CN113535362 A CN 113535362A
- Authority
- CN
- China
- Prior art keywords
- scheduling
- node
- task
- management
- service
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/461—Saving or restoring of program or task context
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/465—Distributed object oriented systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention relates to a distributed scheduling system architecture and a micro-service workflow scheduling method, belonging to the field of computers. The invention designs the control center for micro-service scheduling as a distributed architecture, separates control logic from business logic and business logic from execution results, and adopts an asynchronous task mode, which greatly shortens the time occupied by execution threads and markedly improves micro-service response speed. Distributed locks provide ordered scheduling and fault tolerance for background micro-service workflows, and with the same number of threads in the pool, the concurrency and high-availability capabilities of the scheduling center are significantly improved.
Description
Technical Field
The invention belongs to the field of computers, and particularly relates to a distributed scheduling system architecture and a micro-service workflow scheduling method.
Background
Mainstream micro-service systems manage and control micro-service applications through a micro-service gateway product. The micro-service gateway handles interface service calls between micro-service modules and takes on scheduling work such as security, routing, proxying, monitoring, logging, and rate limiting, forming a centralized scheduling system architecture. In this centralized architecture, all API interface services are registered with the micro-service gateway, which effectively wraps the original service API interfaces in an extra layer and then publishes proxy services. Calls to every micro-service interface can therefore be intercepted at the micro-service gateway. The security, logging, rate-limiting, routing, and other scheduling capabilities of the gateway are all built on this interception, and each capability can be configured as an independent plug-in within the interception process.
The centralized micro-service gateway serves as the API entrance for all services and is prone to becoming a performance bottleneck as the service scale grows. Whenever a user request reaches a background application, any interaction between services is routed through the micro-service gateway; as load rises, the large volume of internal service-to-service calls piles up on the gateway, overloading it and slowing background responses. A further problem is that once the micro-service gateway itself fails, there is no fallback and the entire cluster goes down.
To address this, some solutions deploy multiple gateway instances behind a load balancer to achieve load sharing and high availability, but this pattern leaves scheduling control insufficiently flexible. Other micro-service systems offer decentralized architectures, such as a ServiceMesh service gateway: an SDK package with control functions is embedded in each service, background services interact point to point, and actual call requests and data streams do not pass through a control center. The drawbacks are that SDK packages must be designed and embedded for every micro-service, the implementation workload is large, and the approach is ill-suited to complex workflow task scheduling.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is how to provide a distributed scheduling system architecture and a micro-service workflow scheduling method that address the performance bottleneck a centralized micro-service gateway easily encounters.
(II) technical scheme
In order to solve the technical problem, the invention provides a distributed scheduling system architecture, which comprises a micro-service registration center eureka, a scheduling center, an execution node, a scheduling database and a service database, wherein the scheduling center comprises a plurality of scheduling nodes;
the scheduling node and the execution node are distributed and deployed in a micro-service mode; the role and API address information of each node are registered on the eureka of the micro-service center, and are uniformly maintained by the eureka of the micro-service center;
the scheduling node comprises a remote call controller, a callback controller, a management runtime module and a core scheduler; wherein the core scheduler is constructed based on quartz; the management runtime is used for realizing various management functions; the core communication between the scheduling node and the execution node comprises remote calling (RMS) and callback (Callback): an execution instruction is sent to the execution node through the remote call controller, the job running result returned from the executor of the execution node is received through the callback controller, and a complex job flow sequence can be received from the job chain module of the execution node through the job management component;
the scheduling database is connected with the scheduling center and is used for persistently storing scheduling related data;
the execution node is an execution module embedded in each micro-service and comprises an executor, a job chain and a service bean; the executor executes the task and returns the result to the scheduling center through the callback interface; the job chain combines the execution order and dependency relationships of tasks to meet complex job scheduling requirements; the service bean is the carrier that embeds the execution node in the micro-service;
the service database is connected with the execution node and used for storing the server data of the persistent micro-service application.
Further, the management functions of the scheduling node comprise job management, monitoring management, log management, configuration management, trigger management and scheduling-log management, and a Restful interface and dynamic web page display are provided.
Further, the data stored in the scheduling database comprises a task sequence, monitoring data, log data and configuration data.
Furthermore, the communication between the scheduling node and the execution node performs remote calls and result callbacks through an HTTP-based API interface.
Furthermore, the scheduling node sends a synchronous or asynchronous execution instruction to the execution node through the remote call controller; the executor supports executing tasks in both synchronous and asynchronous modes and returns the result to the scheduling center through the callback interface.
The invention also provides a micro-service workflow scheduling method based on the distributed scheduling system architecture, which is characterized by comprising the following steps,
s1, the scheduling node acquires an idle thread from the task scheduling thread pool and accesses the scheduling database through the new thread to obtain a task; if a task needs to be executed, go to step S2; otherwise, enter a dormant state until awakened, then restart this step;
s2, the scheduling node acquires the flow lock through competition; the flow lock is granted to the optimal node in the distributed dispatching center, i.e., the optimal node is elected as the management node, and nodes that do not obtain the lock block until the flow lock becomes available;
s3, the management node opens a transaction, takes the first task out of the task database, judges the instruction type, submits the task to the task queue, remotely calls the execution node, deletes the task from the task database, closes the transaction, and records log information;
s4, the management node releases the flow lock and then releases the thread resource, and the thread returns to the thread pool to await the next task dispatch.
Furthermore, the scheduling node calls the execution node in a non-blocking mode, and the flow lock and the thread can be released without waiting for the result callback of the execution node.
Further, when the instruction acquired by the management node/execution node is "resource insufficient", the blocked thread is suspended, and the suspended resource-starved flow is later awakened and executed again.
Further, when the management node fails or the heartbeat is lost due to network jitter, the following management node fault-tolerant flow is executed:
(1) the dispatching center monitors a management node fault event and triggers a fault tolerance mechanism;
(2) available scheduling nodes compete for the fault-tolerant lock, the scheduling node which obtains the fault-tolerant lock becomes a fault-tolerant management node, and the fault-tolerant management node broadcasts a fault-tolerant alarm notification and records log information;
(3) the fault-tolerant management node queries the task instances whose call source is the failed node, updates the call source of those instances to Null, and generates a new task instruction;
(4) releasing the fault-tolerant lock and completing fault tolerance.
Further, the fault-tolerance method also comprises the following steps after fault tolerance is completed: the scheduling center performs thread scheduling again, and the new management node takes over by monitoring the different states of newly submitted tasks; for running tasks, the task instance state is monitored; it is judged whether the task was successfully submitted to the task queue: if so, the task instance state is monitored, and if not, the task instance is resubmitted.
(III) advantageous effects
The invention provides a distributed scheduling system architecture and a micro-service workflow scheduling method. Distributed locks are adopted to realize ordered scheduling and fault tolerance of background micro-service workflows, and with the same number of threads in the pool, the concurrency and high-availability capabilities of the scheduling center are significantly improved.
Drawings
FIG. 1 is a diagram of a distributed scheduling system architecture according to the present invention;
FIG. 2 is a flow chart of an embodiment of distributed scheduling of the present invention;
FIG. 3 is a flow chart of distributed scheduling fault tolerance according to the present invention.
Detailed Description
In order to make the objects, contents and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention aims to provide a distributed solution for micro-service scheduling management: the control functions of a traditional micro-service gateway are stripped out to form an independent, distributed control center, achieving decentralization and high availability of the control center. In the distributed scheduling process, control logic is separated from business logic, the distributed lock design guarantees ordered scheduling and fault tolerance among control nodes, and highly flexible scheduling of complex workflow tasks is achieved.
The invention designs the control center for micro-service scheduling as a distributed architecture, separates control logic from business logic and business logic from execution results, and adopts an asynchronous task mode, which greatly shortens the time occupied by execution threads and markedly improves micro-service response speed. Distributed locks provide ordered scheduling and fault tolerance for background micro-service workflows, and with the same number of threads in the pool, the concurrency and high-availability capabilities of the scheduling center are significantly improved.
The architecture diagram of the distributed scheduling system proposed by the present invention is shown in fig. 1. The system architecture comprises a micro-service registration center eureka, a scheduling center, an execution node, a scheduling database and a service database. The dispatch center includes a plurality of dispatch nodes.
The whole architecture is built on micro-services: the scheduling nodes and the execution nodes are deployed in a distributed fashion as micro-services. The role and API address information of each node are registered with the micro-service registration center eureka and uniformly maintained by it. This enables decentralized scheduling of services and high availability of the cluster.
The dispatching center is composed of a plurality of dispatching nodes. A single scheduling node includes a remote call controller, a callback controller, a management runtime, and a core scheduler. The core scheduler is built on quartz; since quartz natively supports clustering, complex task triggering and scheduling can be realized;
the management function is realized through a management runtime component which is arranged in a modularized mode, the functions of job management, monitoring management, log management, configuration management, trigger management, log scheduling and the like are supported, and a Restful interface and web page dynamic display are provided.
The core communication between a scheduling node and an execution node comprises remote calls (RMS) and callbacks (Callback): synchronous or asynchronous execution instructions are sent to the execution node through the remote call controller, and job running results returned from the executor of the execution node are received through the callback controller. A complex job flow sequence may be received from the job chain module of an execution node by the job management component.
The scheduling database is connected with the scheduling center and is used for persistently storing task sequences, monitoring data, log data, configuration data and the like related to scheduling.
The execution node is an execution module embedded in each micro-service and is responsible for accepting scheduling from the scheduling center; it comprises an executor, a job chain and a service bean. The executor supports both synchronous and asynchronous task execution and returns results to the dispatching center through the callback interface; the job chain combines the execution order and dependency relationships of tasks to meet complex job scheduling requirements; the service bean is the carrier that embeds the execution node in the micro-service.
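As a concrete illustration of how a job chain can combine execution order and dependencies, the following minimal Python sketch derives a valid execution sequence from declared dependencies. The three job names and the use of a topological sort are illustrative assumptions; the patent only states that the job chain combines execution order and dependency relationships.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical three-step flow: each job maps to the set of jobs it depends on
chain = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}

# A topological sort yields an execution order that respects every dependency
order = list(TopologicalSorter(chain).static_order())
# order == ["extract", "transform", "load"]
```

In a real job chain the executor would dispatch each job in this order, waiting on (or chaining callbacks from) its predecessors.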
The service database is connected with the execution node and used for storing the server data of the persistent micro-service application.
The scheduling node and the execution node are separated: the scheduling node is responsible only for scheduling and the execution node only for business. Communication between nodes consists mainly of remote calls and result callbacks through an HTTP API, so the two parts are fully decoupled, enhancing the overall extensibility of the system.
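The remote-call/callback exchange between the two decoupled parts can be sketched as follows. This is a minimal in-process Python analogue: the function calls stand in for HTTP POSTs, and the endpoint roles, names, and payload fields are illustrative assumptions, not the patent's actual API.

```python
import json

def remote_call_controller(executor, task):
    """Scheduling node side: send an execution instruction (asynchronously in
    the real system) and return immediately without waiting for the result."""
    request = json.dumps({"taskId": task["id"], "mode": task.get("mode", "async")})
    executor(request)          # stands in for an HTTP POST to the execution node
    return "dispatched"

def make_executor(callback):
    """Execution node side: run the task, then report the result back through
    the scheduling node's callback interface."""
    def executor(raw_request):
        req = json.loads(raw_request)
        result = {"taskId": req["taskId"], "status": "SUCCESS"}
        callback(json.dumps(result))   # stands in for an HTTP POST to the callback API
    return executor

results = []
def callback_controller(raw_result):
    # Scheduling node side: receive the job running result from the executor
    results.append(json.loads(raw_result))

executor = make_executor(callback_controller)
status = remote_call_controller(executor, {"id": "t-001"})
# status == "dispatched"; results now holds the callback payload for t-001
```

Because the only coupling is the serialized request/result payloads, either side can be replaced or scaled independently, which is the extensibility point the paragraph above makes.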
The distributed scheduling method proposed by the present invention is shown in fig. 2, and includes the following steps:
And S1, the scheduling node acquires an idle thread from the task scheduling thread pool and accesses the scheduling database through the new thread to obtain a task. If a task needs to be executed, go to step S2; otherwise, enter a dormant state until awakened, then restart this step.
And S2, the scheduling node acquires the flow lock through competition. The flow lock is granted to the optimal node in the distributed dispatching center, i.e., the optimal node is elected as the management node; nodes that do not acquire the lock block until the flow lock becomes available.
S3, the management node opens a transaction, takes the first task from the task database, judges the instruction type, and submits it to the task queue to remotely call the execution node. The management node then deletes the task from the task database, closes the transaction, and records log information.
S4, the management node releases the flow lock and then releases the thread resource, and the thread returns to the thread pool to await the next task dispatch.
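Steps S1-S4 can be sketched as an in-process Python analogue, with a threading.Lock standing in for the distributed flow lock and plain lists standing in for the scheduling database and task queue; all names are illustrative assumptions.

```python
import queue
import threading

flow_lock = threading.Lock()        # stands in for the distributed flow lock
task_db = ["task-A", "task-B"]      # stands in for the scheduling database
task_queue = queue.Queue()          # tasks handed off to execution nodes
log = []

def schedule_once():
    # S1: a pool thread polls the scheduling database for pending work
    if not task_db:
        return                      # would sleep here until awakened
    # S2: compete for the flow lock; the winner acts as the management node,
    # losers block here until the lock is released
    with flow_lock:
        # S3: take the first task, submit it to the queue, delete it, log it
        task = task_db.pop(0)
        task_queue.put(task)        # the remote call to the execution node goes here
        log.append(f"dispatched {task}")
    # S4: the lock is released by the context manager; the thread returns to the pool

pool = [threading.Thread(target=schedule_once) for _ in range(2)]
for t in pool:
    t.start()
for t in pool:
    t.join()
# Both tasks are dispatched exactly once, in order, despite two competing threads
```

The mutual exclusion in S2/S3 is what guarantees each task is taken and deleted by exactly one node; a real deployment would use a distributed lock service rather than an in-process lock.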
In the above scenario, the management node typically initiates the remote call to the execution node asynchronously, and the flow lock and thread may be released without waiting for the execution node's result callback. In this working mode the scheduling node calls the execution node in a non-blocking manner, avoiding the performance impact of task business logic and improving system performance.
However, in some stateful application scenarios, task execution has a strict ordering and can only proceed synchronously: only after the execution node finishes the task through the synchronous executor and returns the result to the management node can the management node release the flow lock and thread resources. If sub-flows are further nested within each other, this can cause threads to wait on one another in a cycle and deadlock. To address this, a "resource insufficient" instruction type is added: when the instruction obtained by the management node/execution node is "resource insufficient", the blocked thread is suspended so that the thread pool gains a free thread, and the suspended resource-starved flow can later be awakened and executed again.
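A minimal Python sketch of this "resource insufficient" mechanism is shown below: a flow that cannot proceed is parked rather than left blocking a thread, and is re-queued once resources recover. The instruction name and the queue layout are illustrative assumptions.

```python
from collections import deque

run_queue = deque(["flow-1", "flow-2"])
suspended = deque()                    # parked flows waiting for resources
finished = []

def step(resources_available):
    """Process one flow; park it instead of blocking when resources run short."""
    if not run_queue:
        return
    flow = run_queue.popleft()
    if not resources_available:
        suspended.append(flow)         # "resource insufficient": suspend, free the thread
    else:
        finished.append(flow)

step(resources_available=False)        # flow-1 is parked, its thread is freed
step(resources_available=True)         # flow-2 runs to completion

# Wake-up: resources recovered, so re-queue the suspended flow and run it
run_queue.extend(suspended)
suspended.clear()
step(resources_available=True)
# finished == ["flow-2", "flow-1"]
```

The key property is that a starved flow never holds a pool thread while waiting, so the pool always has capacity to make progress on other flows, breaking the circular-wait condition for deadlock.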
In a scheduling system, a management node may fail or lose its heartbeat due to network jitter. The distributed scheduling system of the invention achieves cluster high availability through fault tolerance; the management node fault-tolerance flow is shown in fig. 3.
(1) The dispatching center monitors the fault event of the management node and triggers a fault tolerance mechanism.
(2) Available scheduling nodes compete for the fault-tolerant lock, the scheduling node which obtains the fault-tolerant lock becomes a fault-tolerant management node, and the fault-tolerant management node broadcasts a fault-tolerant alarm notification and records log information.
(3) The fault-tolerant management node queries the task instances whose call source is the failed node, updates the call source of those instances to Null, and generates a new task instruction.
(4) Releasing the fault-tolerant lock and completing fault tolerance.
(5) After fault tolerance is completed, the scheduling center performs thread scheduling again, and the new management node takes over according to the different states of newly submitted tasks: for running tasks, the task instance state is monitored; it is judged whether the task was successfully submitted to the task queue, and if so, the task instance state is monitored, while if not, the task instance is resubmitted.
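The fault-tolerance flow (1)-(5) can be sketched as a minimal Python analogue, with a threading.Lock standing in for the distributed fault-tolerant lock and a list of dicts standing in for the task-instance table; the field names are illustrative assumptions.

```python
import threading

fault_lock = threading.Lock()          # stands in for the distributed fault-tolerant lock
instances = [                          # stands in for the scheduling database
    {"id": 1, "caller": "node-A", "state": "RUNNING"},
    {"id": 2, "caller": "node-B", "state": "RUNNING"},
]
alerts = []

def tolerate(failed_node, candidate):
    # (2) only the node that wins the lock becomes the fault-tolerant manager
    if not fault_lock.acquire(blocking=False):
        return False
    try:
        alerts.append(f"{candidate}: takeover of {failed_node}")   # broadcast alarm + log
        # (3) find instances whose call source is the failed node, clear the
        # source, and mark each one for a fresh task instruction
        for inst in instances:
            if inst["caller"] == failed_node:
                inst["caller"] = None
                inst["state"] = "RESUBMITTED"
        return True
    finally:
        fault_lock.release()           # (4) release the fault-tolerant lock

took_over = tolerate("node-A", candidate="node-C")
# instance 1 is re-homed (caller None, RESUBMITTED); instance 2 is untouched
```

The non-blocking acquire mirrors the competition in step (2): losing candidates simply back off, so exactly one node performs the takeover even if several detect the failure at once.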
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.
Claims (10)
1. A distributed scheduling system architecture is characterized in that the system architecture comprises a micro-service registration center eureka, a scheduling center, an execution node, a scheduling database and a service database, wherein the scheduling center comprises a plurality of scheduling nodes;
the scheduling node and the execution node are distributed and deployed in a micro-service mode; the role and API address information of each node are registered on the eureka of the micro-service center, and are uniformly maintained by the eureka of the micro-service center;
the scheduling node comprises a remote call controller, a callback controller, a management runtime module and a core scheduler; wherein the core scheduler is constructed based on quartz; the management runtime is used for realizing various management functions; the core communication between the scheduling node and the execution node comprises remote calling (RMS) and callback (Callback): an execution instruction is sent to the execution node through the remote call controller, the job running result returned from the executor of the execution node is received through the callback controller, and a complex job flow sequence can be received from the job chain module of the execution node through the job management component;
the scheduling database is connected with the scheduling center and is used for persistently storing scheduling related data;
the execution node is an execution module embedded in each micro-service and comprises an executor, a job chain and a service bean; the executor executes the task and returns the result to the scheduling center through the callback interface; the job chain combines the execution order and dependency relationships of tasks to meet complex job scheduling requirements; the service bean is the carrier that embeds the execution node in the micro-service;
the service database is connected with the execution node and used for storing the server data of the persistent micro-service application.
2. The distributed scheduling system architecture of claim 1, wherein the management functions of the scheduling node comprise job management, monitoring management, log management, configuration management, trigger management, and scheduling-log management, and a Restful interface and dynamic web page display are provided.
3. The distributed scheduling system architecture of claim 1 wherein the scheduling database stores data including task sequences, monitoring data, logging data, and configuration data.
4. The distributed scheduling system architecture of claim 1, wherein the communication between the scheduling node and the execution node performs remote calls and result callbacks through an HTTP-based API interface.
5. The distributed scheduling system architecture of claim 1 wherein the scheduling node sends synchronous or asynchronous execution instructions to the execution node via the remote call controller, and the executor supports both synchronous and asynchronous execution of tasks and returns results to the scheduling center via the callback interface.
6. A method for scheduling micro-service workflow based on the distributed scheduling system architecture of any of claims 1-5, characterized in that the method comprises the following steps,
s1, the scheduling node acquires an idle thread from the task scheduling thread pool and accesses the scheduling database through the new thread to obtain a task; if a task needs to be executed, go to step S2; otherwise, enter a dormant state until awakened, then restart this step;
s2, the scheduling node acquires the flow lock through competition; the flow lock is granted to the optimal node in the distributed dispatching center, i.e., the optimal node is elected as the management node, and nodes that do not obtain the lock block until the flow lock becomes available;
s3, the management node opens a transaction, takes the first task out of the task database, judges the instruction type, submits the task to the task queue, remotely calls the execution node, deletes the task from the task database, closes the transaction, and records log information;
s4, the management node releases the flow lock and then releases the thread resource, and the thread returns to the thread pool to await the next task dispatch.
7. The micro-service workflow scheduling method of claim 6, wherein the scheduling node calls the execution node in a non-blocking manner, releasing the flow lock and thread without waiting for the execution node's result callback.
8. The micro-service workflow scheduling method of claim 6, wherein when the instruction retrieved by the management node/execution node is "resource insufficient", the blocked thread is suspended, and the suspended resource-starved flow is later awakened and executed again.
9. The micro-service workflow scheduling method of claim 6, wherein when a management node fails or loses its heartbeat due to network jitter, the following management node fault-tolerance procedure is performed:
(1) the dispatching center monitors a management node fault event and triggers a fault tolerance mechanism;
(2) available scheduling nodes compete for the fault-tolerant lock, the scheduling node which obtains the fault-tolerant lock becomes a fault-tolerant management node, and the fault-tolerant management node broadcasts a fault-tolerant alarm notification and records log information;
(3) the fault-tolerant management node inquires the task instances of which the calling sources are the original fault nodes, updates the calling sources of the instances to Null and generates a new task instruction;
(4) releasing the fault-tolerant lock and completing fault tolerance.
10. The micro-service workflow scheduling method of claim 9, further comprising, after fault tolerance is completed: the scheduling center performs thread scheduling again, and the new management node takes over by monitoring the different states of newly submitted tasks; for running tasks, the task instance state is monitored; it is judged whether the task was successfully submitted to the task queue: if so, the task instance state is monitored, and if not, the task instance is resubmitted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110841580.7A CN113535362B (en) | 2021-07-26 | 2021-07-26 | Distributed scheduling system architecture and micro-service workflow scheduling method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113535362A true CN113535362A (en) | 2021-10-22 |
CN113535362B CN113535362B (en) | 2023-07-28 |
Family
ID=78120719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110841580.7A Active CN113535362B (en) | 2021-07-26 | 2021-07-26 | Distributed scheduling system architecture and micro-service workflow scheduling method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113535362B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114625520A (en) * | 2022-05-16 | 2022-06-14 | 中博信息技术研究院有限公司 | Distributed task scheduling gateway scheduling method based on current limiting |
CN114691233A (en) * | 2022-03-16 | 2022-07-01 | 中国电子科技集团公司第五十四研究所 | Remote sensing data processing plug-in distributed scheduling method based on workflow engine |
CN115357403A (en) * | 2022-10-20 | 2022-11-18 | 智己汽车科技有限公司 | Micro-service system for task scheduling and task scheduling method |
- 2021-07-26: CN application CN202110841580.7A filed (granted as patent CN113535362B (en), legal status: Active)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160077889A1 (en) * | 2014-09-16 | 2016-03-17 | Oracle International Corporation | System and method for supporting waiting thread notification offloading in a distributed data grid |
CN106445648A (en) * | 2016-10-21 | 2017-02-22 | 天津海量信息技术股份有限公司 | System for achieving multi-worker coordination based on redis |
CN112527476A (en) * | 2019-09-19 | 2021-03-19 | 华为技术有限公司 | Resource scheduling method and electronic equipment |
CN111400053A (en) * | 2020-03-17 | 2020-07-10 | 畅捷通信息技术股份有限公司 | Database access system, method, apparatus and computer-readable storage medium |
CN111752696A (en) * | 2020-06-25 | 2020-10-09 | 武汉众邦银行股份有限公司 | RPC and thread lock based distributed timing task scheduling method |
CN111897646A (en) * | 2020-08-13 | 2020-11-06 | 银联商务股份有限公司 | Asynchronous distributed lock implementation method and device, storage medium and electronic equipment |
CN112148436A (en) * | 2020-09-23 | 2020-12-29 | 厦门市易联众易惠科技有限公司 | Decentralized TCC (transmission control protocol) transaction management method, device, equipment and system |
CN112162841A (en) * | 2020-09-30 | 2021-01-01 | 重庆长安汽车股份有限公司 | Distributed scheduling system, method and storage medium for big data processing |
CN112486695A (en) * | 2020-12-07 | 2021-03-12 | 浪潮云信息技术股份公司 | Distributed lock implementation method under high concurrency service |
CN113157447A (en) * | 2021-04-13 | 2021-07-23 | 中南大学 | RPC load balancing method based on intelligent network card |
Non-Patent Citations (3)
Title |
---|
YUE-JIAO GONG: "Distributed evolutionary algorithms and their models: A survey of the state-of-the-art", Applied Soft Computing, vol. 34, pages 286-300 * |
冯涛: "Research on a Networked Computer Interlocking Simulation System", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2014, pages 033-80 * |
子烁爱学习: "Microservices in Practice: Distributed Locks", pages 1-7, Retrieved from the Internet <URL:https://www.cnblogs.com/MrSaver/p/11567997.html> * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114691233A (en) * | 2022-03-16 | 2022-07-01 | 中国电子科技集团公司第五十四研究所 | Remote sensing data processing plug-in distributed scheduling method based on workflow engine |
CN114625520A (en) * | 2022-05-16 | 2022-06-14 | 中博信息技术研究院有限公司 | Distributed task scheduling gateway scheduling method based on rate limiting |
CN114625520B (en) * | 2022-05-16 | 2022-08-30 | 中博信息技术研究院有限公司 | Distributed task scheduling gateway scheduling method based on rate limiting |
CN115357403A (en) * | 2022-10-20 | 2022-11-18 | 智己汽车科技有限公司 | Micro-service system for task scheduling and task scheduling method |
Also Published As
Publication number | Publication date |
---|---|
CN113535362B (en) | 2023-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113535362B (en) | Distributed scheduling system architecture and micro-service workflow scheduling method | |
CN100530107C (en) | Single-process content server device and method based on an IO event notification mechanism |
US9553944B2 (en) | Application server platform for telecom-based applications using an actor container | |
CN106850829B (en) | A microservice design method based on non-blocking communication |
CN105045658B (en) | A method for implementing distributed dynamic task scheduling on embedded multi-core DSPs |
US6167423A (en) | Concurrency control of state machines in a computer system using cliques | |
US20050149908A1 (en) | Graphical development of fully executable transactional workflow applications with adaptive high-performance capacity | |
US20070050484A1 (en) | Enterprise application server system and method | |
CN111506412A (en) | Distributed asynchronous task construction and scheduling system and method based on Airflow | |
US9098359B2 (en) | Durable execution of long running applications | |
CN108804238B (en) | Soft bus communication method based on remote procedure call | |
WO2011137672A1 (en) | Method and device for task execution based on database | |
US20130086154A1 (en) | System and method for providing asynchrony in web services | |
CN101741904B (en) | Method for building distributed space computation service node and gateway device | |
CN103500119B (en) | A task allocation method based on pre-scheduling |
JPH0563821B2 (en) | ||
CN108073414B (en) | Jedis-based implementation method for merging concurrent multithreaded requests and batch-submitting and distributing results |
US10198271B2 (en) | System and method for booting application servers in parallel | |
JP2005267118A (en) | Inter-processor communication system and program in a parallel processing system using a single-processor OS |
US8380788B2 (en) | System and method for providing user context support in a native transaction platform | |
Bykov et al. | Orleans: A framework for cloud computing | |
CN111736809A (en) | Distributed robot cluster network management framework and implementation method thereof | |
Frantz et al. | An efficient orchestration engine for the cloud | |
CN100535864C (en) | Method for invalidating timed-out messages under system process scheduling |
US20100122261A1 (en) | Application level placement scheduler in a multiprocessor computing environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||