CN108062243B - Execution plan generation method, task execution method and device - Google Patents

Publication number
CN108062243B
Authority
CN
China
Prior art keywords
task
node
computing node
execution plan
computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610980192.6A
Other languages
Chinese (zh)
Other versions
CN108062243A (en)
Inventor
周明耀
浦世亮
周胜凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201610980192.6A
Publication of CN108062243A
Application granted
Publication of CN108062243B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The application discloses a method for generating an execution plan, comprising: determining attribute information of a task; determining, according to the attribute information, a computing node matched with the attribute information as a first computing node for executing the task; and generating a first execution plan of the task according to the identification of the first computing node, the first execution plan including the identification of the first computing node. The application also discloses an apparatus for generating an execution plan, and a task execution method and apparatus.

Description

Execution plan generation method, task execution method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for generating an execution plan, a method and an apparatus for executing a task.
Background
In the field of computer technology, a computing device (such as a server or a terminal) executes tasks according to a set execution plan. In particular, in cloud services (Cloud services), where tasks are executed through the cooperation of multiple nodes, an execution plan needs to be set before a task is executed to ensure that the task runs smoothly.
Here, a task is a basic unit of work that a computing device can complete, and often includes one or more instructions that can be processed by a program. An execution plan is a scheme specifying how a task is to be executed; it generally includes the identifications of the computing nodes to be used to execute the task. A computing node may be a computing device, a cluster of computing devices, or an application program running on a computing device.
In the related art, in order to apply a matching execution plan to different tasks, execution plans corresponding to different task types are generally set and solidified. By solidified, it is meant that once an execution plan is set, it remains unchanged unless a programmer resets it.
With a solidified execution plan, tasks are always executed according to that fixed plan, which limits flexibility.
Disclosure of Invention
The embodiment of the application provides a method for generating an execution plan, which is used for solving the problem in the related art that tasks cannot be flexibly executed because solidified execution plans are adopted.
The embodiment of the application further provides an apparatus for generating an execution plan, which is likewise used for solving the problem in the related art that tasks cannot be flexibly executed because solidified execution plans are adopted.
The embodiment of the application also provides a task execution method and a task execution device.
The embodiment of the application adopts the following technical scheme:
a method of generating an execution plan, comprising:
determining attribute information of the task;
determining a computing node matched with the attribute information according to the attribute information, and using the computing node as a first computing node for executing the task;
generating a first execution plan of the task according to the identification of the first computing node; the first execution plan includes an identification of the first compute node.
Optionally, the determining, according to the attribute information, a computing node matched with the attribute information as a first computing node for executing the task includes:
determining, according to the attribute information and the resource occupation of the computing nodes, a computing node that matches the attribute information and whose resource occupation meets a preset resource occupation requirement, as a first computing node for executing the task.
Optionally, after generating the first execution plan, the method further includes:
and sending the first execution plan and the task to a first computing node executing the task, so that the first computing node executing the task executes the task according to the first execution plan.
Optionally, after sending the first execution plan and the task to a first computing node executing the task, the method further includes:
receiving a node fault notification message sent by a first computing node executing the task;
generating a second execution plan according to the attribute information based on the node fault notification message;
the second execution plan includes an identification of a second computing node that executes the task.
Optionally, the first execution plan further includes at least one of the following information:
a type of the task;
the type of service required to complete the task;
the order in which each first compute node provides service;
the receiver information of the task completion result corresponding to the task;
a unique identification of the task;
index information of an identification of a first compute node currently to execute the task.
Optionally, the method is applied to a distributed computing system; then
The method further comprises the following steps:
after an event that a computing node is switched to an abnormal working state occurs in the distributed computing system, updating stored registration information of the computing node in the distributed computing system; the registration information comprises the identification of each computing node in a normal working state in the distributed computing system;
synchronizing the updated registration information to computing nodes in the distributed computing system.
A task execution method applied to a computing node in a distributed computing system, the method comprising:
receiving a first execution plan and a task; the first execution plan including an identification of the first compute node; the first computing node comprises a computing node which is determined according to the attribute information and is matched with the attribute information;
and executing the task according to the first execution plan.
Optionally, according to the first execution plan, executing the task includes:
judging, based on the service types provided locally, whether the local node is capable of executing the task according to the first execution plan;
and when the judgment result is yes, executing the task according to the first execution plan.
Optionally, after the task is executed, the method further includes:
judging whether there is a receiver node in a normal working state for receiving the service result; the service result is an execution result obtained after the task is executed locally according to the first execution plan;
and if it is judged that no such receiver node exists, sending a node fault notification message to a scheduling server so that the scheduling server generates a second execution plan according to the attribute information of the task.
Optionally, the determining whether there is a receiver node in a normal working state for receiving the service result includes:
determining, by parsing the first execution plan, the identification of the receiver node expected to receive the service result;
judging whether registration information of the receiver node expected to receive the service result is stored locally; if such registration information is stored, judging that there is a receiver node in a normal working state for receiving the service result; otherwise, judging that no such receiver node exists;
the locally stored registration information includes locally stored information of the computing nodes in a normal working state.
Optionally, the method is applied to a computing node in a distributed computing system; then
The method further comprises the following steps:
and acquiring, when the registration information of the computing nodes in the distributed computing system changes, the changed registration information through the scheduling server, and updating the locally stored registration information according to the changed registration information.
Optionally, after the task is executed, the method further includes:
sending a service result, the first execution plan and the task to a receiver node;
and the service result is an execution result obtained after the task is executed according to the first execution plan locally.
Optionally, the first computing node specifically includes: a computing node that is determined according to the attribute information and the resource occupation of the computing nodes, matches the attribute information, and meets the preset resource occupation requirement; then
The method further comprises the following steps:
and sending the local resource occupation to a scheduling server in the distributed computing system.
An execution plan generation apparatus comprising:
an information determination unit for determining attribute information of a task;
the node determining unit is used for determining a computing node matched with the attribute information according to the attribute information and used as a first computing node for executing the task;
the plan generating unit is used for generating a first execution plan of the task according to the identification of the first computing node; the first execution plan includes an identification of the first compute node.
Optionally, the node determining unit is configured to:
and determining, according to the attribute information and the resource occupation of the computing nodes, a computing node that matches the attribute information and whose resource occupation meets a preset resource occupation requirement, as a first computing node for executing the task.
Optionally, the apparatus further comprises:
and the sending unit is used for sending the first execution plan and the task to a first computing node executing the task after the plan generating unit generates the first execution plan, so that the first computing node executing the task executes the task according to the first execution plan.
Optionally, the apparatus further comprises:
a message receiving unit, configured to receive a node fault notification message sent by a first computing node executing the task after the sending unit sends the first execution plan and the task to the first computing node executing the task;
the plan generating unit is further configured to generate a second execution plan according to the attribute information based on the node fault notification message received by the message receiving unit;
the second execution plan includes an identification of a second computing node that executes the task.
Optionally, the first execution plan further includes at least one of the following information:
a type of the task;
the type of service required to complete the task;
the order in which each first compute node provides service;
the receiver information of the task completion result corresponding to the task;
a unique identification of the task;
index information of an identification of a first compute node currently to execute the task.
Optionally, the apparatus is applied to a distributed computing system; then
The device further comprises:
the synchronization unit is used for updating locally stored registration information of the computing nodes in the distributed computing system after an event that the computing nodes are switched to an abnormal working state occurs in the distributed computing system; the registration information comprises the identification of each computing node in a normal working state in the distributed computing system; synchronizing the updated registration information to computing nodes in the distributed computing system.
A task execution device comprising:
a receiving unit configured to receive a first execution plan and a task; the first execution plan including an identification of the first compute node; the first computing node comprises a computing node which is determined according to the attribute information and is matched with the attribute information;
and the execution unit is used for executing the task according to the first execution plan.
Optionally, the execution unit is configured to:
judging, based on the service types provided locally, whether the local node is capable of executing the task according to the first execution plan;
and when the judgment result is yes, executing the task according to the first execution plan.
Optionally, the apparatus further comprises:
the message sending unit is used for judging, after the execution unit executes the task, whether there is a receiver node in a normal working state for receiving the service result; the service result is an execution result obtained after the task is executed locally according to the first execution plan;
and if it is judged that no such receiver node exists, sending a node fault notification message to a scheduling server so that the scheduling server generates a second execution plan according to the attribute information of the task.
Optionally, the message sending unit is configured to:
determining, by parsing the first execution plan, the identification of the receiver node expected to receive the service result;
judging whether registration information of the receiver node expected to receive the service result is stored locally; if such registration information is stored, judging that there is a receiver node in a normal working state for receiving the service result; otherwise, judging that no such receiver node exists;
the locally stored registration information includes information of the computing nodes in a normal working state.
Optionally, the apparatus is applied to a computing node in a distributed computing system; then
The device further comprises:
and the updating unit is used for acquiring the changed registration information through the scheduling server when the registration information of the computing nodes in the distributed computing system changes, and updating the locally stored registration information according to the changed registration information.
Optionally, the apparatus further comprises:
the plan sending unit is used for sending a service result, the first execution plan and the task to a receiver node after the execution unit executes the task;
and the service result is an execution result obtained after the task is executed according to the first execution plan locally.
Optionally, the apparatus is applied to a computing node in a distributed computing system; the first computing node specifically includes: a computing node that is determined according to the attribute information and the resource occupation of the computing nodes, matches the attribute information, and meets the preset resource occupation requirement; then
The device further comprises:
and the resource condition sending unit is used for sending the local resource occupation to the scheduling server in the distributed computing system.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
Because a computing node matched with the attribute information of a task can be determined, according to that attribute information, as a first computing node for executing the task, and a first execution plan of the task is generated according to the identification of the first computing node, a way of generating an execution plan from the attribute information of a task is provided. Since this way does not require the execution plan to be solidified, it solves the problem in the related art that tasks cannot be flexibly executed because solidified execution plans are adopted.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic implementation flow chart of a method for generating an execution plan according to an embodiment of the present application;
FIG. 1b is a schematic diagram of a distributed computing system provided by embodiments of the present application;
FIG. 1c is a schematic diagram illustrating three computing nodes cooperatively performing tasks according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a specific implementation of a task execution method according to an embodiment of the present application;
fig. 3a is a scene diagram of embodiment 2 of the present application;
FIG. 3b is a diagram of several different basic aspects that may occur in an execution chain;
FIG. 3c is a schematic diagram of how tasks are executed according to an execution plan after the execution plan is generated;
fig. 4 is a schematic structural diagram of an execution plan generation apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a task execution device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Example 1
In order to solve the problem that tasks cannot be flexibly executed due to the fact that a solidified execution plan mode is adopted in the related art, the embodiment of the application provides a method for generating an execution plan.
The execution subject of the execution plan generation method provided by the embodiment of the present application may be a computing device, for example a computing device in a distributed computing system implementing a cloud service. Taking a computing device of a distributed computing system as the execution subject, in the embodiment of the present application this device can take on the work of generating execution plans, managing the registration information of the other computing devices in the system, and the like, and may be referred to as a scheduling server. A computing node may be a computing device, a cluster of computing devices, or an application program running on a computing device.
The present application does not limit the execution subject; for convenience of description, the execution subject in the embodiments of the present application is taken to be a scheduling server in a distributed computing system.
The specific implementation flow chart of the method is shown in fig. 1a, and comprises the following steps:
step 11, the scheduling server determines the attribute information of the task;
the task referred to herein is a task to be executed.
The task may be one that a client (Client) requests the distributed computing system to execute.
Referring to fig. 1b, fig. 1b is a schematic diagram illustrating a distributed computing system according to an embodiment of the present application. The system comprises a scheduling server and a plurality of computing nodes which are connected with the scheduling server. When a client needs to execute a certain task, the client can send the task to the scheduling server to request the distributed computing system to execute the task. Tasks, which may also be referred to as task execution requests, are basic elements of work that can be performed by a computing device, and often include one or more instructions that can be processed by a program. The task serves as a trigger condition, and the computing node receiving the task can be triggered to execute corresponding operation.
The task may include an instruction and attribute information of the task.
If the task is executed for the purpose of completing processing of specified data (such as image data or audio data), the client may transmit the specified data when transmitting the task.
When the task includes the attribute information of the task, the scheduling server receives the task and analyzes the task to determine the attribute information of the task.
Tasks tend to differ in type, expected execution time, priority, and so on. Tasks of different types include, for example, video encoding tasks, picture recognition tasks and audio search tasks; tasks with different expected execution times include, for example, a task to be "executed at 12 noon" and a task to be "executed at 9 pm"; tasks with different priorities include, for example, "high priority" tasks and "low priority" tasks.
In the embodiment of the present application, the attribute information may include, but is not limited to, at least one of the type, the expected execution time, and the priority of the task.
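For illustration only, the attribute information described above might be modeled as a small record; the field names and sample values below are assumptions of this sketch, not part of the application:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskAttributes:
    # Each field is optional: attribute information may include any subset.
    task_type: Optional[str] = None                # e.g. "video_encoding"
    expected_execution_time: Optional[str] = None  # e.g. "12:00" (12 noon)
    priority: Optional[str] = None                 # e.g. "high" or "low"

# A high-priority video encoding task expected to run at 12 noon:
attrs = TaskAttributes(task_type="video_encoding",
                       expected_execution_time="12:00",
                       priority="high")
```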
Step 12, the scheduling server determines a computing node matched with the attribute information of the task according to the attribute information of the task, and the computing node is used as a computing node for executing the task;
Following the example shown in fig. 1b, the scheduling server, upon receiving a task, determines which computing nodes will execute the task.
The attribute information of a task generally has a decisive influence on which computing nodes should execute it. For example, a video encoding task can generally be executed by computing nodes capable of providing a video encoding service; a task to be "executed at 12 noon" can be executed by a computing node that is relatively idle at 12 noon; a "high priority" task can be executed by a computing node with relatively good performance. Therefore, in the embodiment of the present application, the computing node matched with the attribute information is determined, according to the attribute information of the task, as the computing node for executing the task.
Taking attribute information that includes at least one of the type of the task, the expected execution time of the task, and the priority of the task as an example, how to determine the computing node executing the task according to the attribute information is described in detail below:
1. When the attribute information of the task is the type of the task, the computing nodes matched with the type of the task are those whose provided services match the type of the task. For example, a video encoding task may generally be executed by a computing node capable of providing a video encoding service; that is, the video encoding task type is matched with computing nodes capable of providing video encoding services. Of course, if the video encoding task requires two services, namely "video pre-processing" and "video encoding the pre-processed video", then the computing nodes respectively providing these two services are both computing nodes matched with the type of the video encoding task.
In this embodiment, in order to determine the computing nodes matching the type of a task, the scheduling server may obtain, when a computing node registers with it (or at another time), the identifier reported by the computing node, the IP (Internet Protocol) address of the computing node, and information on the services the computing node can provide (generally, service type identifiers).
The identifier of a computing node may be a symbol preset in the computing node for uniquely identifying it; for example, a programmer may set the identifier in the computing node so that the node acquires it and sends it to the scheduling server. The services a computing node can provide are the operations it is able to perform. For example, if a computing node can perform a face recognition operation, then "performing a face recognition operation" is a service it can provide, and that service can be represented by a unique identifier (e.g., a service type identifier); likewise, if a computing node can perform a video encoding operation, then "performing a video encoding operation" is a service it can provide, also representable by a unique identifier. The service type identifiers of the different types of services may be preset in the computing node, for example by a programmer, so that the node acquires them and sends them to the scheduling server.
Based on the data sent by the computing nodes, and on the pre-established first mapping relationship between different task type identifiers and the service type identifiers of the services required to complete those tasks, as shown in Table 1, the scheduling server may further establish the second mapping relationship, between each service type identifier and the identifiers of the computing nodes capable of providing the corresponding service, and the third mapping relationship, between each computing node identifier and its IP address.
The mapping relationship between the task type identifier and the service type identifier may be preset in the scheduling server. For example, it may be set in the scheduling server by a programmer.
Table 1:
(Table 1 appears only as an image in the original document. It records the first mapping relationship, from task type identifiers to the service type identifiers of the services required to complete those tasks; the second mapping relationship, from each service type identifier to the identifiers of the computing nodes able to provide the corresponding service, for example from service type "a" to computing nodes 00, 01, 02 and 03; and the third mapping relationship, from computing node identifiers to IP addresses.)
Based on the mapping relationships shown in Table 1, after acquiring the task type identifier of the task the client requests to execute, the scheduling server may determine the service type identifier corresponding to the type of the task by querying the first mapping relationship, and may then determine the identifiers of the computing nodes corresponding to that service type identifier by querying the second mapping relationship. There may be more than one computing node in a distributed computing system providing the same service. As shown in Table 1, the computing nodes corresponding to service type "a", i.e., the computing nodes providing the service of type "a", include 00, 01, 02 and 03. After determining these four computing nodes by querying the second mapping relationship, the scheduling server may select at least one of them as a computing node for executing the task.
In the embodiment of the application, the scheduling server may select the computing node on a load balancing basis according to the current resource occupation of the four computing nodes. The resource occupation may be, for example, CPU occupancy and/or memory usage. Taking CPU occupancy as the resource occupation and continuing the above example, if the current CPU occupancies of computing nodes 00, 01, 02 and 03 are 50%, 60%, 70% and 80% respectively, the scheduling server may select computing node 00, which has the lowest CPU occupancy, as the computing node for executing the task.
The resource occupation according to which the scheduling server selects a computing node may be reported to the scheduling server by the computing node itself.
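A minimal sketch of steps 11 and 12, assuming the mapping relationships of Table 1 are held in plain dictionaries and CPU occupancy is the resource occupation; all names and sample values here are illustrative, not taken from the application:

```python
first_mapping = {"video_encoding": "a"}           # task type id -> service type id
second_mapping = {"a": ["00", "01", "02", "03"]}  # service type id -> node ids
third_mapping = {"00": "10.0.0.10", "01": "10.0.0.11",
                 "02": "10.0.0.12", "03": "10.0.0.13"}  # node id -> IP address
cpu_occupancy = {"00": 0.5, "01": 0.6, "02": 0.7, "03": 0.8}  # reported by nodes

def select_first_node(task_type_id: str) -> str:
    service_type_id = first_mapping[task_type_id]   # query the first mapping
    candidates = second_mapping[service_type_id]    # query the second mapping
    # Load balancing: pick the candidate with the lowest CPU occupancy.
    return min(candidates, key=lambda node: cpu_occupancy[node])

node = select_first_node("video_encoding")
print(node, third_mapping[node])  # -> 00 10.0.0.10
```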
2. When the attribute information of the task is the expected execution time of the task, the computing nodes matched with this attribute are those that are relatively idle at the expected execution time, so that sufficient computing resources are available to provide the service. For example, assuming each computing node of the distributed computing system can provide the same service, after determining the expected execution time of the task (e.g., "execute at 12 noon"), the scheduling server may determine a computing node that is relatively idle at 12 noon as the computing node executing the task, according to the information on idle periods that each computing node sends when registering with the scheduling server (or at other times). The determined computing node can then execute the task when the expected execution time arrives.
In the embodiment of the present application, in order to facilitate querying information of idle periods in which the computing nodes are located, the scheduling server may also establish a mapping relationship as shown in table 1. For the process of establishing the mapping relationship, please refer to the above, which is not described herein again.
3. When the attribute information of the task is the priority of the task, the computing nodes matched with this attribute are those with relatively good performance. For example, assuming each computing node of the distributed computing system can provide the same service, and assuming the priority of the task sent by the client to the scheduling server is "high priority", the scheduling server may determine the computing node with the best performance as the computing node executing the task, according to the hardware configuration information each computing node sends when registering with the scheduling server (or at other times). Generally, the higher the hardware configuration, the better the performance; the lower the hardware configuration, the worse the performance.
In the embodiment of the present application, in order to facilitate querying the performance of the computing node, the scheduling server may also establish a mapping relationship as shown in table 1. For the process of establishing the mapping relationship, please refer to the above, which is not described herein again.
4. When the attribute information of the task includes at least two attribute information of the type, the expected execution time and the priority of the task, the scheduling server may determine, from the computing nodes in the distributed computing system, a computing node that matches both of the at least two attribute information as a computing node for executing the task.
Whatever attribute information is used as the basis for determining the computing nodes executing the task, in the embodiment of the present application the scheduling server may select the computing nodes for executing the task on a load balancing basis, according to the current resource occupation of the computing nodes capable of executing it, so as to prevent overloaded individual computing nodes from affecting the stability of the distributed computing system.
After determining the computing node matching the attribute information of the task as the computing node for executing the task, the scheduling server further performs step 13 described below. For convenience of description, the computing node determined to match the attribute information of the task is hereinafter referred to as a first computing node.
Step 13, the scheduling server generates a task execution plan (hereinafter referred to as a first execution plan) according to the identifier of the first computing node.
In the embodiment of the present application, the generated first execution plan may take the form of a character string. So that the first computing nodes can execute the task according to it, the first execution plan is subsequently passed among the first computing nodes.
The information the first execution plan may contain is described below, case by case.
1. In some cases, only one first computing node is determined, and that node provides only one service, by which it can complete the task. Then the first execution plan may include only the identification of that first computing node.
The identification of the first computing node here is generally one that characterizes the data receiving address (e.g., the IP address) of the node. Specifically, it may be the IP address of the first computing node itself, or a unique identifier mapped to that IP address, as shown in Table 1.
When the first execution plan only contains the identifier of one first computing node, the scheduling server can determine the identifier of the first computing node contained in the first execution plan by analyzing the first execution plan, and further determine the IP address of the first computing node according to the identifier; and then, the scheduling server sends the task to the first computing node to be executed according to the IP address.
When the task is executed for the purpose of completing processing of specified data (such as image data or audio data), the scheduling server may also transmit the specified data to the first computing node.
2. In some cases, the identifications of multiple first computing nodes are determined. Since the service result produced by one first computing node may serve as the input of another, the first execution plan may include, in addition to the identification of each first computing node, the order in which the first computing nodes provide their services. The order may be characterized, for example, by the order of the identifications in a first computing node identification sequence, i.e., a sequence in which the identifications of the first computing nodes are arranged in the same order in which the nodes provide services. For example, the first computing node identification sequence {00, 11, 12} indicates that service is provided first by 00, then by 11, and finally by 12. A first computing node can determine this order by parsing the first execution plan.
Based on such a first execution plan, the first computing node may determine a next computing node providing service according to the order, and send the service result, the first execution plan, and the task to the next computing node providing service according to an identifier of the next computing node providing service, so that the next computing node providing service executes the task according to the service result and the first execution plan.
Taking the first computing node to provide a service as an example: after obtaining the service result produced by providing its service, that node may send the service result, the first execution plan and the task to the next computing node providing a service, so that the next node executes the task according to the service result and the first execution plan. When the task is executed to complete the processing of specified data (such as image data or audio data), the scheduling server may also transmit the specified data to the first computing node; the service result obtained by the first computing node may then be a processing result obtained by processing the specified data.
For such a generation manner of the first execution plan, after determining the first computing node that executes the task, the scheduling server may determine an order in which the first computing node provides the service according to an order in which the service required by the task is provided; a first execution plan is generated based on the order in which the first compute nodes provide services and the identity of the first compute nodes.
The scheduling server may pre-store a mapping relationship (hereinafter referred to as the fourth mapping relationship) between the types of different tasks and the order of the services required by those tasks. After receiving the task the client requests to execute, the scheduling server can, on the one hand, determine the identifications of the first computing nodes by querying the mapping relationships shown in Table 1 according to the task type identifier; on the other hand, according to the task type identifier, it can determine, by querying the fourth mapping relationship, the order of providing services mapped to that task type identifier, and thereby the order in which the first computing nodes provide their services.
If the completion of some tasks does not require that the services must be provided in a certain order, the order in which the services are provided by the first compute node may not be included in the first execution plan.
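Under the same assumptions, generating a string-form first execution plan from the fourth mapping relationship might look as follows; the JSON layout and field names are this sketch's own, not the application's:

```python
import json

# Hypothetical fourth mapping: task type identifier -> ordered service type ids.
fourth_mapping = {"face_recognition": ["a", "b", "c"]}

def generate_first_plan(task_type_id, task_id, nodes_by_service):
    service_order = fourth_mapping[task_type_id]   # query the fourth mapping
    node_sequence = [nodes_by_service[s] for s in service_order]
    plan = {
        "task_id": task_id,              # unique identification of the task
        "task_type": task_type_id,       # type of the task
        "service_order": service_order,  # services required to complete the task
        "node_sequence": node_sequence,  # order in which nodes provide service
        "index": 1,                      # index info of the node to act next
    }
    return json.dumps(plan)              # the plan travels as a character string

plan_str = generate_first_plan("face_recognition", "task-0001",
                               {"a": "00", "b": "01", "c": "02"})
```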
3. In some cases, a plurality of first computing nodes are determined, and at least some of the different first computing nodes may provide a plurality of services. Then, in order to enable the first computing nodes capable of providing multiple services to determine what services should be provided when executing the task, the first execution plan may include, in addition to the identification of each first computing node, an identification of the type of service required to complete the task.
Based on such a first execution plan, after receiving the first execution plan, the first computing node may provide a corresponding service according to the service type identifier of the service required to complete the task, which is included in the first execution plan. In order to facilitate the first computing node to accurately determine what type of service it should provide, in the first execution plan, a mapping relationship between the service type identifier and an identifier of the first computing node providing the corresponding type of service may be established.
For such a manner of generating the first execution plan, taking each mapping relationship shown in table 1 as an example, after the scheduling server determines the first computing node executing the task by querying table 1 according to the type of the task, the scheduling server may generate the first execution plan including the identifier of the first computing node and the service type identifier of the service required to complete the task according to the identifier of the first computing node and the service type identifier corresponding to the task type identifier.
4. In some cases, the first execution plan may include, in addition to the identification of the first computing node, at least one of the following information:
the type of task;
the receiver information of the task completion result corresponding to the task;
a unique identification of the task;
index information of an identification of a first compute node currently about to perform a task.
The type of the task is used for enabling the first computing node to know what type of task is currently executed. In one embodiment, the first computing node may be capable of providing services having the same service type identification for different types of tasks.
For example, the first computing node can provide a service of "filtering noise data" with the same service type identification for the "video processing task" and the "audio processing task", however, the source code of the service of "filtering noise data" corresponding to the "video processing task" is often different from the source code of the service of "filtering noise data" corresponding to the "audio processing task". In such a scenario, if the first computing node pre-stores a mapping relationship between different task types (e.g., task type identifiers) and source codes of corresponding services, the first computing node may determine, according to the task type identifiers included in the first execution plan, what type of service should be provided according to the first execution plan, that is, what source codes should be run.
For another example, the role of the type of task may be equivalent to a service type identifier, so that the first computing node can determine what type of service should be provided according to the type of task (e.g., task type identifier) included in the first execution plan.
The receiver information of the task completion result corresponding to the task enables the last first computing node providing service according to the execution plan to know to which receiver the task completion result should be sent after the task is completed. The receiver information may be, for example, an IP address.
The unique identification of the task may function in the case that "the input of a first computing node in performing the task is the output of at least two other first computing nodes". For example, it is assumed that the input of the first computing node 02 (hereinafter referred to as 02) includes a first execution plan and a task (assumed to be task a) sent by the first computing node 00 (hereinafter referred to as 00) and the first computing node 01 (hereinafter referred to as 01), and further includes service results generated after the 00 and 01 provide services required for completing the task a, respectively. Then, if it is assumed that 02 needs to merge the received service results related to the same execution plan when providing the service required for completing task a, 02 may determine that the service results sent by 00 and 01 respectively correspond to the same execution plan according to the unique identifiers of the tasks included in the first execution plan sent by 00 and 01 respectively, so as to merge the service results sent by 00 and 01 respectively. Of course, the unique identifier of the task may also play a role in other scenarios, and the embodiment of the present application does not limit the other scenarios.
The index information of the identification of the first computing node currently about to execute the task is used to determine that identification from among the identifications of the first computing nodes included in the first execution plan. Index information plays a role similar to a "pointer variable". For example, if the identifications of the first computing nodes in the first execution plan form the sequence {00, 11, 12}, then index information of 1 denotes the 1st element of the sequence, i.e., 00; after the 1st element is read, the index information becomes 2. Index information of 2 denotes the 2nd element, i.e., 11; after the 2nd element is read, the index information becomes 3; and so on.
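A small sketch of the "pointer variable" behaviour of the index information described above, under the assumed plan layout:

```python
def next_node(plan: dict) -> str:
    """Reads the identification of the first computing node currently about to
    execute the task, then advances the index (field names are assumptions)."""
    sequence = plan["node_sequence"]       # e.g. ["00", "11", "12"]
    node_id = sequence[plan["index"] - 1]  # index 1 -> 1st element, i.e. "00"
    plan["index"] += 1                     # index becomes 2 after the read
    return node_id

plan = {"node_sequence": ["00", "11", "12"], "index": 1}
assert next_node(plan) == "00" and plan["index"] == 2
assert next_node(plan) == "11" and plan["index"] == 3
```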
In the following, some alternative implementations of the solutions provided in the examples of the present application are described.
After the scheduling server generates the first execution plan by performing step 13, it may further send the first execution plan and the task to the first computing node, so that the first computing node executes the task according to the first execution plan.
It has been mentioned above that the first execution plan may exist in the form of a character string. For the first computing node, after receiving the first execution plan and the task, the first execution plan in the form of a character string may be parsed to determine information included in the first execution plan.
For example, assume that the information contained in the first execution plan includes: the mapping relation between the identification of each first computing node and the service type identification of the service required by the task is completed, and the sequence of providing the service by each first computing node. Then, taking the first computing nodes comprising 00, 01 and 02 as shown in fig. 1c as an example, the operations respectively performed by the first computing nodes may be as follows:
00, as the first computing node to provide a service required for completing the task, parses the first execution plan after receiving it together with the task, and obtains the following analysis results 1 to 3:
Analysis result 1: the identifications of the first computing nodes include 00, 01 and 02;
analysis result 2: the mapping relationship between the identifier of each first computing node and the service type identifier of the service required for completing the task is shown in table 2;
table 2:
Identification of the first computing node | Service type identification
00 | a
01 | b
02 | c
Analysis result 3: the order in which the first computing nodes provide service is 00, then 01, then 02.
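A sketch of how a first computing node might recover analysis results 1 to 3 from a string-form plan, assuming the hypothetical JSON layout used in the earlier sketches:

```python
import json

def parse_plan(plan_str):
    plan = json.loads(plan_str)
    result_1 = plan["node_sequence"]            # identifications: 00, 01, 02
    result_2 = dict(zip(plan["node_sequence"],  # node id -> service type id,
                        plan["service_order"])) # i.e. the content of Table 2
    result_3 = list(plan["node_sequence"])      # service order: 00, 01, 02
    return result_1, result_2, result_3

ids, node_to_service, order = parse_plan(
    '{"node_sequence": ["00", "01", "02"], "service_order": ["a", "b", "c"]}')
assert node_to_service == {"00": "a", "01": "b", "02": "c"}
```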
Based on the above analysis result 2, 00 performs the following operations:
00 provides the service of type a;
after generating a corresponding service result (hereinafter referred to as service result 1), 00 sends service result 1, the first execution plan and the task to 01 according to analysis results 1 and 3.
The operations performed by 00 are explained as follows:
Suppose the task is "recognize a face from a video image", and the service with service type identifier a specifically includes "filtering noise from the video image". Then 00 providing the service of type a may specifically include: 00 filters noise from the video image, and the noise-filtered video image is service result 1. The video image here corresponds to the specified data mentioned above; it may be sent by the client to the scheduling server and then forwarded to 00, or it may be pre-stored locally at 00.
00 sending service result 1, the first execution plan and the task to 01 according to analysis results 1 and 3 may specifically include: 00 queries the IP address corresponding to 01 from the locally stored registration information of the computing nodes, and sends service result 1, the first execution plan and the task to 01 according to that IP address.
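The forwarding step 00 performs might be sketched as follows; the registration table, field names and the send() transport are placeholders, not the application's actual mechanism:

```python
registration_info = {"01": "10.0.0.11", "02": "10.0.0.12"}  # node id -> IP

def send(ip, payload):
    # Placeholder transport; a real system might use RPC or a message queue.
    print(f"-> {ip}: {sorted(payload)}")

def forward(service_result, plan_str, task, next_node_id):
    ip = registration_info[next_node_id]  # look up 01's IP in the registration info
    send(ip, {"service_result": service_result, "plan": plan_str, "task": task})

forward("service result 1", plan_str="...", task="recognize a face",
        next_node_id="01")
```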
The registration information of the computing nodes referred to here may be synchronized by the scheduling server to the computing nodes in the distributed computing system. The registration information may include the mapping relationship between the identifications of the computing nodes and their IP addresses. After an event occurs in the distributed computing system in which a computing node switches to an abnormal working state, the scheduling server updates the locally stored registration information of the computing nodes in the system, and then synchronizes the updated registration information to the computing nodes, so that each computing node holds the most current registration information.
The registration information includes the identification of each computing node in the distributed computing system in a normal working state. When an event that a computing node is switched to an abnormal working state occurs in a distributed computing system, a scheduling server updates locally-stored registration information of the computing node, which mainly means that the scheduling server deletes the registration information of the computing node currently in the abnormal working state from the locally-stored registration information, so that the updated registration information includes the identification of each computing node in the normal working state but does not include the identification of each computing node in the abnormal working state.
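The scheduling server's registration bookkeeping described in the last two paragraphs might be sketched as follows; the class and method names are assumptions of this sketch:

```python
class Registry:
    def __init__(self):
        self.nodes = {}  # id -> IP of nodes in a normal working state only

    def register(self, node_id, ip):
        self.nodes[node_id] = ip
        self._sync()

    def on_node_abnormal(self, node_id):
        # Updating means deleting the registration of the node now in an
        # abnormal working state, then pushing the new table to all nodes.
        self.nodes.pop(node_id, None)
        self._sync()

    def _sync(self):
        snapshot = dict(self.nodes)
        for _node_id, ip in self.nodes.items():
            print(f"sync {snapshot} -> {ip}")  # placeholder push to each live node

registry = Registry()
registry.register("01", "10.0.0.11")
registry.on_node_abnormal("01")  # 01 disappears from every node's registration
```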
The operations performed by the compute nodes 01 and 02 are described below.
For 01: after receiving the first execution plan, the task and service result 1 sent by 00, 01 parses the first execution plan and obtains the above analysis results 1 to 3.
Based on analysis result 2, 01 provides the service of type b; after generating a corresponding service result (hereinafter referred to as service result 2), 01 sends service result 2, the first execution plan and the task to 02 according to analysis results 1 and 3.
Continuing the above example, suppose the task is "recognize a face from a video image" and the service of type b specifically includes "extracting image features from the video image". Then 01 providing the service of type b may specifically include: 01 extracts image features from service result 1. The extracted image features are service result 2.
For 02: after receiving the first execution plan, the task and service result 2 sent by 01, 02 parses the first execution plan and obtains the above analysis results 1 to 3.
Based on analysis result 2, 02 provides the service of type c and generates a corresponding service result (hereinafter referred to as service result 3).
Continuing the above example, suppose the task is "recognize a face from a video image" and the service of type c specifically includes "judging whether the image features extracted from the video image match face features, and outputting the judgment result". Then 02 providing the service of type c may specifically include: 02 judges whether service result 2 matches the face features and outputs the judgment result. The output judgment result is service result 3.
The implementation process of the execution plan generation method provided by the embodiment of the present application has been described above. With this method, the computing node matched with the attribute information of a task can be determined, according to that attribute information, as the first computing node for executing the task, and the first execution plan of the task is generated according to the identification of the first computing node, providing a way of generating execution plans from the attribute information of tasks. Compared with the solidified execution plans of the related art, the first computing node can be flexibly selected according to actual needs, which solves the problem that tasks cannot be flexibly executed when solidified execution plans are adopted.
With the scheme provided by the embodiment of the present application, in a scenario where the scheduling server can monitor the states of the computing nodes through heartbeat connections with them, if a computing node is monitored to be in an abnormal working state (for example, its heartbeat connection with the scheduling server is broken), then when generating an execution plan the scheduling server avoids selecting that node as a first computing node and selects other, normally working computing nodes instead, which reduces the possibility of task execution failure.
To further reduce the possibility of task execution failure, in the embodiment of the present application the scheduling server may also receive a node fault notification message sent by a first computing node executing a task; based on this message, the scheduling server may use the flow shown in fig. 1a to regenerate an execution plan for the task (which may be referred to as a second execution plan). The content and function of the second execution plan are similar to those of the first execution plan; see the related description above, which is not repeated here. The identification of the computing node included in the second execution plan may be referred to as the identification of the second computing node, and the computing node determined by the scheduling server to execute the second execution plan may be referred to as the second computing node.
Based on the generated second execution plan, the scheduling server may subsequently send the task and the second execution plan to the second computing node for execution.
The node fault notification message mentioned above is described below:
following the previous example, 00 may determine whether the registration information of 01 exists currently by querying the registration information of the computing node after obtaining the service result 1 and before sending the service result 1 to 01. In the embodiment of the present application, if the scheduling server monitors a computing node in an abnormal operating state (e.g., a failure occurs), the registration information of the computing node in the abnormal operating state may be deleted (or logged out) and the updated registration information is synchronized to other computing nodes, then, if 00 finds that no 01 registration information currently exists by querying the registration information of the computing node, it may be determined that 01 is in the abnormal operating state, so as to generate a node failure notification message and send the node failure notification message to the scheduling server. The node failure notification message may include a unique identifier of a task, so that the scheduling server knows which computing node corresponding to the task has failed. In addition, the node failure notification message may further include an identifier of 01, so that the scheduling server can determine that the corresponding execution plan is executed in failure due to which computing node fails.
In the above, from the perspective of the scheduling server, a specific implementation process of the execution plan generation method provided in the embodiment of the present application is introduced.
Hereinafter, a task execution method based on the same inventive concept as the above method will be described from the viewpoint of a computing node.
The specific implementation flowchart of the task execution method, as shown in fig. 2, includes the following steps:
step 21, the computing node receives an execution plan and a task;
the computing node may be a certain computing device, a cluster of computing devices, or an application program running on a computing device.
The execution plan may be a first execution plan generated by the flow shown in fig. 1a, or may also be an execution plan regenerated by the scheduling server, such as the second execution plan.
The task is a basic work element which can be completed by a computing node, and the task often comprises one or more instructions which can be processed by a program.
Step 22, the computing node executes the task according to the execution plan. Executing the task may specifically include providing the services required to complete the task. For example, the computing node may determine the service type identifier (or task type identifier) included in the execution plan, and then provide the service corresponding to that identifier.
For a specific implementation of step 22, reference may be made to the above description of how the computing nodes 00, 01, 02 provide services according to an execution plan in one example.
Some alternative embodiments of step 22 are described below:
the computing node may determine whether the computing node itself (i.e., local) has the capability to execute the task according to the first execution plan;
if the judgment result is yes, the task is executed according to the execution plan; if the judgment result is no, the task is not executed and the process may end.
In this embodiment of the application, when the execution plan includes a type of a service required for completing a task, the computing node determines whether the local node has a capability of executing the task according to the execution plan, which may specifically include:
the computing node determines the type of service required by the task by analyzing the execution plan;
the computing node judges whether the determined type of service can be provided locally or not;
if the determined type of service can be provided locally, judging that the local has the capability;
and if the determined type of service cannot be provided locally, judging that the local is not provided with the capability.
For example, taking the computing node 00 shown in fig. 1c, if the task is "identify a face from a video image" and the first execution plan includes the mapping relationship shown in table 2, then 00 may judge whether a service of type "a" can be provided locally. If it can be provided, 00 judges that it locally has the capability to execute the task according to the first execution plan; otherwise, 00 judges that it does not have that capability.
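For illustration only (this sketch is not part of the patent), the capability judgment above might be expressed as follows in Java; the class name, method name and the node-to-service mapping parameter are all hypothetical:

    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch of the local capability check described above.
    public class CapabilityChecker {

        // Service type identifiers this node can provide locally, e.g. {"a", "b"}.
        private final Set<String> localServiceTypes;

        public CapabilityChecker(Set<String> localServiceTypes) {
            this.localServiceTypes = localServiceTypes;
        }

        // nodeToService: mapping of compute node identifier -> required service
        // type, parsed from the first execution plan (cf. table 2 above).
        public boolean hasCapability(Map<String, String> nodeToService, String localNodeId) {
            String requiredType = nodeToService.get(localNodeId);
            return requiredType != null && localServiceTypes.contains(requiredType);
        }
    }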
Optionally, in consideration of a situation that a task execution fails due to a computing node failure, after the step 22 is completed, the task execution method provided in the embodiment of the present application may further include the steps of:
the computing node judges whether there is a receiver node in a normal working state for receiving the service result; the service result is the execution result obtained after the computing node locally executes the task according to the execution plan.
And if the receiver node which is used for receiving the service result and is in the normal working state does not exist, sending a node fault notification message to the scheduling server, so that the scheduling server regenerates the execution plan according to the attribute information of the task (for example, generates a second execution plan).
In an embodiment, the determining, by the computing node, whether there is a receiver node in a normal operating state for receiving the service result may specifically include:
determining an identification of a recipient node that is expected to receive the execution result by parsing the execution plan; judging whether registration information of a receiver node expected to receive an execution result is stored locally; if the registration information of the receiver node expected to receive the execution result is stored, judging that the receiver node which is used for receiving the service result and is in a normal working state exists; otherwise, judging that no receiver node which is used for receiving the service result and is in a normal working state exists. The locally stored registration information includes locally stored information of the computing nodes in a normal working state. The computing nodes can synchronize the information of each computing node in the normal working state, which is acquired by the scheduling server, to the local.
For example, following the example described above, 00 can obtain analysis results 1 to 3 by parsing the execution plan. From analysis result 3, the identifier of the receiver node expected to receive service result 1 can be determined to be 01. 00 may then judge whether the saved registration information of the computing nodes contains registration information for 01. If so, it judges that a node 01 able to normally receive the execution result exists; if not, it judges that no such node exists.
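A minimal Java sketch of this receiver check, and of the node failure notification it may trigger, is given below; all class and method names, including the scheduling-server client interface, are illustrative assumptions:

    import java.util.Map;

    // Illustrative sketch of the receiver-node check described above.
    public class ReceiverCheck {

        // Locally synchronized registration info: node ID -> IP address of
        // compute nodes currently in a normal working state.
        private final Map<String, String> registrationInfo;
        private final SchedulingServerClient scheduler; // hypothetical client

        public ReceiverCheck(Map<String, String> registrationInfo,
                             SchedulingServerClient scheduler) {
            this.registrationInfo = registrationInfo;
            this.scheduler = scheduler;
        }

        // After producing a service result, verify the expected receiver (e.g.
        // node "01") is still registered; otherwise notify the scheduler.
        public boolean receiverAvailable(String taskId, String receiverNodeId) {
            if (registrationInfo.containsKey(receiverNodeId)) {
                return true; // receiver is in a normal working state
            }
            // The notification carries the task's unique identifier and,
            // optionally, the failed node's identifier (cf. the text above).
            scheduler.sendNodeFailureNotification(taskId, receiverNodeId);
            return false;
        }
    }

    // Hypothetical stand-in for the channel to the scheduling server.
    interface SchedulingServerClient {
        void sendNodeFailureNotification(String taskId, String failedNodeId);
    }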
As described above, in the distributed computing system, when the registration information of the computing node changes, the changed registration information may be synchronized to each computing node through the scheduling server, so that each computing node updates the locally stored registration information according to the changed registration information.
The above is an exemplary description of a task execution method provided in the embodiments of the present application.
It should be noted that, in the related-art approach of solidifying the execution plan, several fixed computing nodes always cooperate to execute a fixed type of task, so the service each computing node should provide is also solidified, and there is no need to transfer the execution plan between computing nodes. With the execution plan generation method provided by the embodiments of the present application, the computing nodes used to execute a task have a certain randomness, so the execution plan can be transferred between computing nodes. From it, a computing node participating in the task can learn, for the received task, which execution plan should be adopted; what role it should play in executing the task, i.e., what services it should provide; and so on.
Example 2
Embodiment 1 above describes the inventive concept of the present application in detail. To better illustrate the technical features, means and effects of the present application, the execution plan generation method and the task execution method provided by the present application are further described below, forming a further embodiment of the present application.
Fig. 3a is a diagram of an implementation scenario of embodiment 2 of the present application.
In the scenario shown in fig. 3a, a client (client) and a cloud are included.
The client may refer to a computing device of an entity; or, may refer to an application installed on a computing device. In embodiment 2 of the present application, the client is a party that requests execution of a task (task).
The cloud is composed of a plurality of computing nodes. A single computing node, which may be a single computing device; alternatively, a single computing node may be a cluster of computing devices consisting of at least two computing devices. In embodiment 2 of the present application, the cloud is a party that executes a task.
In a typical configuration, a computing device includes one or more Central Processing Units (CPUs), input/output interfaces, network interfaces, and memory. For example, a computer, a smart phone, a smart home device, or a smart car, etc. may be regarded as a computing device.
In the scenario shown in fig. 3a, a task requested by a client may be executed cooperatively by different computing nodes. The computing nodes can participate in the execution of the task one after another in a pipelined manner, so that the work of the task is completed cooperatively, assembly-line style.
Taking the task as an example of a task related to video processing (referred to as a video task for short), in order to complete the video task, steps such as preprocessing, frame analysis, post-processing, storage, retrieval, mining and the like may be performed. Wherein the preprocessing comprises acquiring code stream data; the frame analysis comprises the analysis processing of the acquired code stream data; the post-processing comprises the steps of combining or deleting the code stream data after the analysis processing; the storage comprises the storage of the processing result obtained by the post-processing; the retrieval comprises retrieving the stored processing result according to the keyword; mining includes mining the processing results to obtain useful information hidden in the processing results.
Each of these steps corresponds to a service required to complete the task, as described in embodiment 1, and each step may be performed by a different computing node.
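As an illustrative aid only, the correspondence between these steps and service types might be modeled as follows; the service type identifier strings are assumptions, not values from the patent:

    // Hypothetical mapping of the video-task steps above to service type identifiers.
    enum VideoTaskStep {
        PREPROCESS("pre"),     // acquire code stream data
        FRAME_ANALYSIS("fa"),  // analyze the acquired code stream data
        POSTPROCESS("post"),   // merge or delete the analyzed code stream data
        STORAGE("store"),      // persist the processing result of post-processing
        RETRIEVAL("search"),   // retrieve stored results by keyword
        MINING("mine");        // extract useful information hidden in the results

        final String serviceTypeId;
        VideoTaskStep(String serviceTypeId) { this.serviceTypeId = serviceTypeId; }
    }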
In the distributed computing system, there may be multiple computing nodes providing the same service, and in the embodiment of the present application, the computing nodes may be selected from the computing nodes providing the same service according to the load balancing principle mentioned in embodiment 1 to perform corresponding steps.
In the embodiment of the present application, the selection of the computing node may be completed by a scheduling server disposed in the cloud as shown in fig. 3 a. Specifically, the scheduling server may dynamically generate an execution plan according to the type of task and the current resource condition of the compute node to achieve maximum execution performance.
The generated execution plan may include, but is not limited to, the following parameters:
NodeIDs: the compute node ID array, i.e., a set of compute node identifiers (NodeIDs). A computing node may be a single computing device or a cluster of computing devices.
Index: the index of the current computing node. The index is equivalent to a pointer into NodeIDs: whichever NodeID it points to identifies the computing node that should currently provide service. After that computing node finishes providing its service, the index is updated to point to the next NodeID stored in NodeIDs.
TaskId: the task ID, used to uniquely identify the task.
TaskType: the task type identifier, which lets the computing nodes participating in execution determine the type of the task they are participating in. Different types of tasks may require the same type of service; in that case a computing node can decide, based on the TaskType, for which task it should provide its service.
ServiceType: the service type array, i.e., a set of service type identifiers. The service type identifiers in the array correspond one-to-one, in order, with the computing node identifiers in NodeIDs; this mapping indicates what type of service each computing node should provide when executing the task.
DestinationInfo: destination information. The destination refers to a recipient of the final execution result of the task. The destination information may include the type of the recipient, an IP address and Port number (Port), etc. The types of the receiving party mentioned herein may include, but are not limited to: an Application (APP), a cloud, a database or data warehouse, and the like.
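For illustration only, the parameters listed above can be pictured as a simple data class. In the following Java sketch, the field names follow the parameter names in the description, while the types, class name and methods are assumptions of ours:

    import java.util.List;

    // Illustrative data-class sketch of the execution plan parameters above.
    public class ExecutionPlan {
        List<String> nodeIds;      // NodeIDs: identifiers of the compute nodes
        int index;                 // Index: position in nodeIds of the node to serve next
        String taskId;             // TaskId: unique identifier of the task
        String taskType;           // TaskType: task type identifier
        List<String> serviceTypes; // ServiceType: serviceTypes.get(i) is the service
                                   // that the node nodeIds.get(i) should provide
        String destinationInfo;    // DestinationInfo: receiver type, IP address, port

        // Identifier of the computing node that should currently provide service.
        public String currentNodeId() {
            return nodeIds.get(index);
        }

        // After a node finishes providing its service, the index is advanced so
        // that it points to the next NodeID stored in nodeIds.
        public void advance() {
            index++;
        }
    }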
In this embodiment of the application, after receiving a task sent by a client, a scheduling server in the cloud determines, according to attribute information of the task included in the task, a computing node matched with the attribute information as a computing node for executing the task in the manner described in embodiment 1. Further, the scheduling server may generate an execution plan including the above parameters according to information such as the identifier of the computing node.
Specifically, the scheduling server may determine the above parameters in the following manner:
generating NodeIDs according to the determined identification of the computing node which is used for executing the task;
creating a pointer pointing to NodeIDs as an Index;
aiming at a task requested to be executed by a client, generating a task Id which uniquely represents the task;
acquiring the type TaskType of the task from the task;
determining the services required to be provided by completing the task by analyzing the task, determining the service type identification of the services required to be provided by completing the task according to the mapping relation between the services of different types and the service type identification which is established in advance, and further generating the ServiceType according to the determined service type identification;
and acquiring DestinationInfo from the task, or taking preset default destination information as DestinationInfo.
After the above parameters are determined, an execution plan containing them can be generated. Specifically, each parameter may exist in the form of a character string; the parameters are combined by concatenating these character strings, and the combined result serves as the execution plan.
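A minimal sketch of that concatenation is given below; since the patent does not fix a concrete encoding, the ';' and ',' delimiters and the field order are assumptions:

    import java.util.List;

    // Illustrative assembly of the plan parameters into one character string.
    public class ExecutionPlanBuilder {
        public static String build(List<String> nodeIds, int index, String taskId,
                                   String taskType, List<String> serviceTypes,
                                   String destinationInfo) {
            return String.join(";",
                    String.join(",", nodeIds),       // NodeIDs
                    Integer.toString(index),         // Index
                    taskId,                          // TaskId
                    taskType,                        // TaskType
                    String.join(",", serviceTypes),  // ServiceType
                    destinationInfo);                // DestinationInfo
        }
    }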
It should be noted that, in a practical application scenario, the following problems may be involved:
1. The first problem: different tasks may require the computing node to provide different services; how should the computing node decide which services it should provide?
This is why the TaskType and the ServiceType are included in the execution plan.
Specifically, the TaskType may include a video processing task identifier, a picture processing task identifier, an audio processing task identifier, and the like; the ServiceType may include a voice analysis service identifier, a data retrieval service identifier, a data mining service identifier, an image recognition service identifier, and the like. Which services a computing node can provide, and under which tasks, may be reported by the computing node to the scheduling server when it registers the corresponding information with the scheduling server.
When the execution plan includes the TaskType and the ServiceType, the computing node participating in the task execution can determine what type of service is required by what type of task and is to be provided by the computing node according to the TaskType and the ServiceType.
2. The second problem: how can the different computing nodes participating in a task learn each other's specific locations, so that the task and the execution plan are delivered accurately?
As mentioned above, in order to ensure the security of the cloud, the execution plan generally does not include the IP address of the compute node, but replaces the IP address with the Identity (ID) of the compute node.
The identifier of the computing node may be reported to the scheduling server when the computing node is registered in the scheduling server.
However, after providing its service, a computing node often needs to send the execution plan, the task and the generated service result to another computing node according to the execution plan. In such a case, the computing node acting as sender needs to know the IP address of the computing node acting as receiver, that is, the mapping relationship between the identifiers of the computing nodes and their IP addresses.
In embodiment 2 of the present application, ZooKeeper may be used to implement data synchronization between the scheduling server and the computing nodes.
ZooKeeper is an open-source distributed application coordination service, an open-source implementation of Google's Chubby, and an important component of Hadoop and HBase. It provides consistency services for distributed systems, including configuration maintenance, naming services, distributed synchronization and group services.
In the embodiments of the present application, ZooKeeper can be used to implement distributed synchronization. Specifically, after a computing node detects that the registration information stored on the scheduling server has changed, it can synchronize that registration information to its own local storage. Here, the registration information includes an identifier and an IP address, and may additionally include the service type identifiers of the services the node can provide.
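As a hedged illustration of this mechanism, a compute node might keep its local registration cache in sync with a ZooKeeper watch roughly as follows; the znode path "/registrations" and the session timeout are assumptions, and only the watch-and-resync pattern reflects the description:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    import java.util.List;

    // Illustrative sketch: a compute node re-reads the registration data
    // whenever it changes on the scheduler side.
    public class RegistrationWatcher implements Watcher {

        private final ZooKeeper zk;

        public RegistrationWatcher(String connectString) throws Exception {
            this.zk = new ZooKeeper(connectString, 30_000, this); // 30s session timeout
            resync();
        }

        @Override
        public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeChildrenChanged) {
                try {
                    resync(); // registration info changed; refresh the local copy
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

        // Re-read all registrations and re-arm the watch (the boolean 'true'
        // re-registers this object as the watcher).
        private void resync() throws Exception {
            List<String> nodeIds = zk.getChildren("/registrations", true);
            for (String nodeId : nodeIds) {
                byte[] data = zk.getData("/registrations/" + nodeId, false, null);
                // 'data' would hold the node's IP address and service type
                // identifiers; updating the local cache is omitted here.
            }
        }
    }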
3. The third problem: the execution chain may take different forms; how can accurate transfer of the task, the execution plan and the service results among the computing nodes participating in execution be ensured?
An execution chain is composed of execution nodes and execution paths: a single computing node participating in the task corresponds to a single node on the execution chain, and the transmission path of the service results generated by the participating computing nodes corresponds to the execution path included in the execution chain.
Fig. 3b is a schematic diagram of several different basic forms an execution chain may take. As can be seen from fig. 3b, the basic forms of an execution chain generally include: branching, merging, looping and flow-through.
These four basic forms are determined by specific business requirements. For example, if a video processing task requires the final processing result to be stored on multiple data storage nodes, the corresponding execution chain is a branch; when the image recognition in a video processing task is subdivided into face recognition and vehicle recognition performed by different computing nodes, with the images for face recognition and the images for vehicle recognition input to those nodes respectively, the corresponding execution chain is also a branch; and when the task requires the service results produced separately by different computing nodes to be processed together afterwards, the corresponding execution chain is a merge.
After the computing nodes for executing the task are determined, the execution chain may be represented as a tree or similar graph according to the identifiers of the computing nodes (each computing node corresponding to a single node on the execution chain) and the order in which they execute the task (from which the transmission path of the service results, i.e., the execution path of the chain, can be determined); the tree then forms part of the execution plan. The nodes of the tree correspond to the identifiers of the computing nodes, and its branches correspond to the transmission paths. A computing node that receives the execution plan can determine the execution chain by parsing the tree.
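For illustration, such a tree can be represented by a minimal node type in which edges model the transmission paths; the names below are hypothetical:

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of an execution chain node: it carries a compute node
    // identifier, and the 'next' edges carry the transmission paths.
    public class ChainNode {

        final String nodeId;                            // identifier of a compute node
        final List<ChainNode> next = new ArrayList<>(); // downstream receivers

        ChainNode(String nodeId) { this.nodeId = nodeId; }

        // A branch is a node with several downstream receivers; a merge is
        // several nodes listing the same downstream receiver.
        ChainNode sendTo(ChainNode receiver) {
            next.add(receiver);
            return receiver;
        }

        // Example of a branch form: node 00 sends its results to both 01 and 02:
        //   ChainNode n00 = new ChainNode("00");
        //   n00.sendTo(new ChainNode("01"));
        //   n00.sendTo(new ChainNode("02"));
    }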
4. The fourth problem: what happens if a computing node participating in the task is moved out of the distributed computing system, or fails?
Suppose that after computing node 1 provides its service, the generated service result needs to be sent to computing node 2. If computing node 2 has been moved out of the distributed computing system containing computing node 1 and the scheduling server, or has currently failed, then computing node 1 sends a failure notification message to the scheduling server to trigger it to regenerate the execution plan.
In the embodiments of the present application, the scheduling server may monitor whether computing node 2 has failed through its heartbeat connection with computing node 2. If it detects that computing node 2 has failed, it deletes the registration information of computing node 2 and synchronizes the updated registration information to each non-failed computing node in the distributed computing system, so that computing node 1 learns that computing node 2 has failed.
In the following, how to execute a task according to an execution plan after the execution plan is generated in one embodiment will be described with reference to fig. 3c of the specification.
It should be noted that, before generating the execution plan, the following steps may be performed:
1. each computing node in the distributed computing system is registered to the scheduling server in a mode of sending a registration request to the scheduling server;
the registration request comprises registration information of the computing node requesting registration.
The registration information of the computing node includes an identifier (NodeID) of the computing node, an IP address, and an identifier of a service type of a service that the computing node can provide, and the like.
2. Each computing node respectively acquires registration information of all computing nodes stored by a scheduling server;
3. each computing node monitors the scheduling server in real time, and when the registration information stored by the scheduling server is found to be changed, the registration information locally stored by the computing node is updated immediately according to the changed registration information;
4. each computing node reports the resource usage of each computing node to the scheduling server periodically (for example, with a period of 10 minutes).
By executing steps 1 to 4 above, the scheduling server acquires the registration information and the resource usage of each computing node, so that it can subsequently generate an execution plan from this information. Specifically, the computing nodes capable of executing the task may be determined based on the attribute information of the task and the registration information of the computing nodes. If several computing nodes can provide the same required service, the node to execute the task may be further selected from among them according to their resource usage.
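The registration request of step 1 and the periodic resource report of step 4 might carry fields such as the following; the field names and metrics are illustrative assumptions:

    import java.util.List;

    // Hypothetical sketch of the registration message a compute node sends to
    // the scheduling server (step 1 above).
    public class RegistrationRequest {
        String nodeId;             // NodeID of the registering compute node
        String ipAddress;          // IP address of the node
        List<String> serviceTypes; // identifiers of the services it can provide
    }

    // Periodic resource report (step 4 above), e.g. sent every 10 minutes; the
    // patent does not fix which metrics are reported.
    class ResourceReport {
        String nodeId;
        double cpuUsage;
        double memoryUsage;
    }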
And after the computing nodes for executing the tasks are determined, generating an execution plan. How to generate the execution plan may refer to the related descriptions in embodiment 1 and embodiment 2, and details are not repeated here.
After the execution plan is generated, please refer to fig. 3c, the following steps may be adopted to execute the task:
step 31: analyzing the generated execution plan, and determining index information of the identifier of the first computing node which is currently about to execute the task, so as to determine the identifier of the corresponding computing node according to the index information;
the device that analyzes the generated execution plan for the first time is generally a scheduling server; after the scheduling server sends the execution plan to the compute node, the compute node parses the execution plan.
Step 32: inquiring whether the IP address corresponding to the determined identification of the computing node exists or not according to locally stored registration information; if the IP address is not inquired, sending a fault notification message to the scheduling server to trigger the scheduling server to regenerate the execution plan; if the IP address is found, go to step 33;
step 33: sending the execution plan and the task to a computing node corresponding to the IP address according to the inquired IP address;
of course, if the subject executing steps 31 to 33 is a computing node, the service result it has obtained may also be transmitted to the computing node corresponding to the IP address found in step 33.
Step 34: after receiving the execution plan and the task, the computing node determines, by parsing the execution plan, the service it currently needs to provide, and judges whether that service can be provided locally; if not, it sends a notification message to the scheduling server to trigger regeneration of the execution plan; if yes, go to step 35;
step 35: the computing node provides the service, and after the service is provided, the index of the current computing node is updated, and then step 31 is executed.
It should be noted that updating the index of the current computing node here means updating it so that it points to the identifier, in NodeIDs, that follows the identifier of the computing node that has just provided its service.
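A compact sketch of the loop formed by steps 31 to 33 is given below, reusing the illustrative ExecutionPlan data class shown earlier; the transport and notification calls are hypothetical stand-ins:

    import java.util.Map;

    // Illustrative sketch of steps 31-33 above.
    public class ExecutionLoop {

        private final Map<String, String> registrationInfo; // node ID -> IP address
        private final SchedulerClient scheduler;             // hypothetical client

        public ExecutionLoop(Map<String, String> registrationInfo,
                             SchedulerClient scheduler) {
            this.registrationInfo = registrationInfo;
            this.scheduler = scheduler;
        }

        public void forward(ExecutionPlan plan, byte[] task, byte[] serviceResult) {
            // Step 31: resolve the identifier of the node that should serve next.
            String nextNodeId = plan.currentNodeId();
            // Step 32: look up its IP address in the locally stored registrations.
            String ip = registrationInfo.get(nextNodeId);
            if (ip == null) {
                // No address found: trigger regeneration of the execution plan.
                scheduler.sendFailureNotification(plan.taskId, nextNodeId);
                return;
            }
            // Step 33: send the plan, the task and (when the sender is itself a
            // computing node) the service result to the resolved address.
            send(ip, plan, task, serviceResult);
        }

        private void send(String ip, ExecutionPlan plan, byte[] task, byte[] result) {
            // Network transport omitted in this sketch.
        }
    }

    // Hypothetical stand-in for the channel to the scheduling server.
    interface SchedulerClient {
        void sendFailureNotification(String taskId, String failedNodeId);
    }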
By adopting the scheme provided by embodiment 2 of the present application, the following beneficial effects can be obtained:
the business logic realized by the cloud service is expressed in the form of an execution plan;
the execution plan can be dynamically generated according to the resource occupancy of the computing nodes and the task type, so execution plans are generated flexibly; and when computing nodes with lower resource occupancy are selected to execute the task, the generated execution plan achieves higher execution efficiency;
the task execution process has certain fault tolerance, and the execution plan can be regenerated when the computing node fails, so that the probability of successful execution of the task is high;
different execution chain morphologies can be supported;
compared with an approach without execution plans, in which the computing nodes must be scheduled to process the service result each time a computing node finishes providing a service, executing tasks according to an execution plan avoids frequent scheduling of the computing nodes; therefore, when the execution chain is long, this scheme can significantly improve task execution efficiency.
Example 3
To solve the problem of low flexibility of the manner of solidifying the execution plan adopted in the related art, embodiment 3 of the present application provides an execution plan generating apparatus for the execution plan generating method shown in fig. 1 a. The specific structural diagram of the apparatus is shown in fig. 4, and includes an information determining unit 41, a node determining unit 42, and a plan generating unit 43. The functional units comprised by the device are described in detail below:
an information determination unit 41 for determining attribute information of the task;
a node determining unit 42, configured to determine, according to the attribute information determined by the information determining unit 41, a computing node matched with the attribute information as a first computing node for executing the task;
and a plan generating unit 43, configured to generate a first execution plan of the task according to the identifier of the first computing node determined by the node determining unit 42. The first execution plan includes an identification of the first compute node.
To ensure load balancing of the distributed computing system, the node determining unit 42 may be specifically configured to: determine, according to the resource occupation condition of the candidate computing nodes and the attribute information determined by the information determining unit 41, a computing node that matches the attribute information and whose resource occupation condition meets a preset resource occupation requirement, as the first computing node for executing the task.
In an implementation manner, in order to enable the first computing node to execute the task according to the first execution plan, the apparatus provided in the embodiments of the present application may further include a sending unit. The sending unit is configured to send the first execution plan and the task to the first computing node after the plan generating unit 43 generates the first execution plan, so that the first computing node executes the task according to the first execution plan.
In an implementation manner, to reduce the possibility of task execution failure, the apparatus provided in the embodiment of the present application may further include a message receiving unit. The message receiving unit is used for receiving a node fault notification message sent by the first computing node executing the task after the sending unit sends the first execution plan and the task to the first computing node executing the task.
When the apparatus further comprises a message receiving unit, the plan generating unit 43 is further configured to generate a second execution plan according to the attribute information based on the node failure notification message received by the message receiving unit. Wherein the second execution plan includes an identification of a second compute node executing the task.
In one embodiment, the attribute information includes at least one of:
type, expected execution time, priority.
In one embodiment, the first execution plan further includes at least one of the following information:
the type of task;
the type of service required to complete the task;
the order in which each first compute node provides service;
the receiver information of the task completion result corresponding to the task;
a unique identification of the task;
index information of an identification of a first compute node currently to execute the task.
In one implementation, the apparatus provided in the embodiments of the present application may be applied to a distributed computing system. In such a case, the apparatus may further include a synchronization unit. The synchronization unit is specifically configured to synchronize the changed registration information to the computing nodes in the distributed computing system when the registration information of the computing nodes in the distributed computing system changes.
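Purely as an illustration of this unit structure, the three units might be modeled as interfaces such as the following; all signatures and the TaskAttributes holder are assumptions, not the patent's API:

    // Illustrative interface sketch of units 41-43.
    interface InformationDeterminationUnit {                 // unit 41
        TaskAttributes determineAttributes(byte[] task);
    }

    interface NodeDeterminationUnit {                        // unit 42
        String determineFirstNode(TaskAttributes attributes);
    }

    interface PlanGenerationUnit {                           // unit 43
        String generateFirstPlan(String firstNodeId, TaskAttributes attributes);
    }

    // Hypothetical holder for the attribute information mentioned above
    // (type, expected execution time, priority).
    class TaskAttributes {
        String type;
        long expectedExecutionTimeMillis;
        int priority;
    }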
With the execution plan generation apparatus provided by the embodiments of the present application, a computing node matched with the attribute information of a task can be determined as the first computing node for executing the task, and the first execution plan of the task can be generated according to the identifier of the first computing node. This provides a way to generate execution plans according to task attribute information, avoids solidifying the execution plan, and thus solves the low-flexibility problem of the solidified execution plans of the related art.
The apparatus for generating the execution plan according to the embodiment of the present application may be included in a scheduling server in a distributed computing system.
The embodiment 3 of the present application provides a task execution device for the same inventive concept as the task execution method shown in fig. 2. The specific structure of the device is schematically shown in fig. 5, and includes a receiving unit 51 and an executing unit 52. The functional units comprised by the device are described in detail below.
A receiving unit 51 for receiving a first execution plan and a task;
an execution unit 52, configured to execute the task according to the first execution plan.
In an embodiment, the execution unit 52 may specifically be configured to: judging whether the local has the capability of executing the task according to the first execution plan or not based on the service type provided by the local; and when the judgment result is yes, executing the task according to the first execution plan.
In an embodiment, the execution unit may be specifically configured to:
when the first execution plan comprises the type of the service required by the task, determining the type of the service required by the task by analyzing the first execution plan;
judging whether the type of service can be provided locally;
if the service of the type can be provided locally, judging that the local has the capability;
and if the local service can not be provided with the type of service, judging that the local service does not have the capability.
In an embodiment, in order to improve the probability of successful execution of a task, the task execution device provided in an embodiment of the present application may further include:
a message sending unit, configured to determine whether there is a receiver node in a normal working state for receiving a service result after the execution unit 52 executes the task; the service result is an execution result obtained after the task is executed according to the first execution plan locally;
and if the receiver node which is used for receiving the service result and is in the normal working state does not exist, sending a node fault notification message to the scheduling server so that the scheduling server generates a second execution plan according to the attribute information of the task.
In an embodiment, the message sending unit may be specifically configured to:
determining, by parsing the first execution plan, an identification of a recipient node that is expected to receive the service outcome;
judging whether registration information of the receiver node expected to receive the service result is stored locally; if the registration information of the receiver node expected to receive the service result is stored, judging that the receiver node which is used for receiving the service result and is in a normal working state exists; otherwise, judging that no receiver node which is used for receiving the service result and is in a normal working state exists.
In one embodiment, the apparatus may be applied to a compute node in a distributed computing system. In such a case, the apparatus may further include an updating unit. The updating unit is used for updating the locally stored registration information according to the changed registration information when the registration information of the computing nodes in the distributed computing system changes.
In one embodiment, the apparatus may further comprise:
and a plan sending unit, configured to send the service result, the first execution plan, and the task to a receiver node after the execution unit 52 executes the task. And the service result is an execution result obtained after the task is executed according to the first execution plan locally.
In one embodiment, when the apparatus is applied to a computing node in a distributed computing system, the apparatus may further include a resource status sending unit. The resource condition sending unit is used for sending the local resource use condition of the computing node to a scheduling server in the distributed system.
The task execution apparatus provided by the embodiments of the present application can execute, and successfully complete, a task according to an execution plan generated by the execution plan generation scheme provided by the embodiments of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (22)

1. A method of generating an execution plan, comprising:
determining attribute information of the task;
determining a computing node matched with the attribute information according to the attribute information, and using the computing node as a first computing node for executing the task;
generating a first execution plan of the task according to the identification of the first computing node; the first execution plan including an identification of the first compute node;
sending the first execution plan and the task to a first computing node executing the task, so that the first computing node executes the task according to the first execution plan;
when a node fault notification message sent by the first computing node is received, regenerating an execution plan according to the attribute information of the task; the node fault notification message is sent by the first computing node under the condition that a receiver node which is used for receiving the service result and is in a normal working state does not exist; and the service result is an execution result obtained after the first computing node executes the task.
2. The method of claim 1, wherein determining, according to the attribute information, a computing node matched with the attribute information as the first computing node for executing the task comprises:
determining, according to the attribute information and the resource occupation condition of the computing nodes, a computing node that is matched with the attribute information and whose resource occupation condition meets a preset resource occupation requirement, and using that computing node as the first computing node for executing the task.
3. The method of claim 1, wherein after sending the first execution plan and the task to a first computing node executing the task, the method further comprises:
receiving a node fault notification message sent by a first computing node executing the task;
generating a second execution plan according to the attribute information based on the node fault notification message;
the second execution plan includes an identification of a second computing node that executes the task.
4. A method according to any of claims 1 to 3, wherein the first execution plan further comprises at least one of the following information:
a type of the task;
the type of service required to complete the task;
the order in which each first compute node provides service;
the receiver information of the task completion result corresponding to the task;
a unique identification of the task;
index information of an identification of a first compute node currently to execute the task.
5. The method of any of claims 1 to 3, wherein the method is applied to a distributed computing system; then
The method further comprises the following steps:
after an event that a computing node is switched to an abnormal working state occurs in the distributed computing system, updating stored registration information of the computing node in the distributed computing system; the registration information comprises the identification of each computing node in a normal working state in the distributed computing system;
synchronizing the updated registration information to computing nodes in the distributed computing system.
6. A task execution method applied to a computing node in a distributed computing system is characterized by comprising the following steps:
receiving a first execution plan and a task; the first execution plan including an identification of a first compute node; the first computing node comprises a computing node which is determined according to the attribute information and is matched with the attribute information;
executing the task according to the first execution plan to obtain a service result;
judging whether a receiver node which is used for receiving the service result and is in a normal working state exists or not; if not, sending a node fault notification message to a scheduling server so that the scheduling server regenerates an execution plan according to the attribute information of the task.
7. The task execution method of claim 6, wherein executing the task according to the first execution plan comprises:
judging whether the local has the capability of executing the task according to the first execution plan or not based on the service type provided by the local;
and when the judgment result is yes, executing the task according to the first execution plan.
8. The task execution method of claim 6, wherein determining whether there is a receiver node in a normal operating state for receiving the service result comprises:
determining, by parsing the first execution plan, an identification of a recipient node that is expected to receive the service outcome;
judging whether registration information of the receiver node expected to receive the service result is stored locally; if the registration information of the receiver node expected to receive the service result is stored, judging that the receiver node which is used for receiving the service result and is in a normal working state exists; otherwise, judging that no receiver node which is used for receiving the service result and is in a normal working state exists;
the locally stored registration information includes locally stored information of the computing nodes in a normal working state.
9. The task execution method of claim 8, wherein the method is applied to compute nodes in a distributed computing system; then
The method further comprises the following steps:
and when the registration information of the computing nodes in the distributed computing system changes, the changed registration information is obtained through the scheduling server, and the locally stored registration information is updated according to the changed registration information.
10. The method of claim 6, wherein after performing the task, the method further comprises:
sending a service result, the first execution plan and the task to a receiver node;
and the service result is an execution result obtained after the task is executed according to the first execution plan locally.
11. The method of claim 6, wherein the first computing node specifically comprises: the computing node is determined according to the attribute information and the resource occupation condition of the computing node, is matched with the attribute information, and meets the preset resource occupation requirement; then
The method further comprises the following steps:
and sending the local resource use condition to a scheduling server in the distributed system.
12. An execution plan generation apparatus, comprising:
an information determination unit for determining attribute information of a task;
the node determining unit is used for determining a computing node matched with the attribute information according to the attribute information and used as a first computing node for executing the task;
the plan generating unit is used for generating a first execution plan of the task according to the identification of the first computing node; the first execution plan including an identification of the first compute node;
a sending unit, configured to send the first execution plan and the task to a first computing node that executes the task, so that the first computing node executes the task according to the first execution plan;
when a node fault notification message sent by the first computing node is received, regenerating an execution plan according to the attribute information of the task; the node fault notification message is sent by the first computing node under the condition that a receiver node which is used for receiving the service result and is in a normal working state does not exist; and the service result is an execution result obtained after the first computing node executes the task.
13. The apparatus of claim 12, wherein the node determining unit is to:
determining, according to the attribute information and the resource occupation condition of the computing nodes, a computing node that is matched with the attribute information and whose resource occupation condition meets a preset resource occupation requirement, and using that computing node as the first computing node for executing the task.
14. The apparatus of claim 12, wherein the apparatus further comprises:
a message receiving unit, configured to receive a node fault notification message sent by a first computing node executing the task after the sending unit sends the first execution plan and the task to the first computing node executing the task;
the plan generating unit is further configured to generate a second execution plan according to the attribute information based on the node fault notification message received by the message receiving unit;
the second execution plan includes an identification of a second computing node that executes the task.
15. The apparatus of any of claims 12 to 14, wherein the first execution plan further comprises at least one of:
a type of the task;
the type of service required to complete the task;
the order in which each first compute node provides service;
the receiver information of the task completion result corresponding to the task;
a unique identification of the task;
index information of an identification of a first compute node currently to execute the task.
16. The apparatus according to any one of claims 12 to 14, wherein the apparatus is applied to a distributed computing system; then
The device further comprises:
the synchronization unit is used for updating locally stored registration information of the computing nodes in the distributed computing system after an event that the computing nodes are switched to an abnormal working state occurs in the distributed computing system; the registration information comprises the identification of each computing node in a normal working state in the distributed computing system; synchronizing the updated registration information to computing nodes in the distributed computing system.
17. A task execution apparatus, comprising:
a receiving unit configured to receive a first execution plan and a task; the first execution plan including an identification of a first compute node; the first computing node comprises a computing node which is determined according to the attribute information and is matched with the attribute information;
the execution unit is used for executing the task according to the first execution plan to obtain a service result;
the message sending unit is used for judging whether a receiver node which is used for receiving the service result and is in a normal working state exists; if not, sending a node fault notification message to a scheduling server so that the scheduling server regenerates an execution plan according to the attribute information of the task.
18. The task execution device of claim 17, wherein the execution unit is to:
judging whether the local has the capability of executing the task according to the first execution plan or not based on the service type provided by the local;
and when the judgment result is yes, executing the task according to the first execution plan.
19. The task execution apparatus of claim 17, wherein the message sending unit is to:
determining, by parsing the first execution plan, an identification of a recipient node that is expected to receive the service outcome;
judging whether registration information of the receiver node expected to receive the service result is stored locally; if the registration information of the receiver node expected to receive the service result is stored, judging that the receiver node which is used for receiving the service result and is in a normal working state exists; otherwise, judging that no receiver node which is used for receiving the service result and is in a normal working state exists;
the locally stored registration information includes locally stored information of the computing nodes in a normal working state.
20. The task execution apparatus of claim 19, wherein the apparatus is applied to a compute node in a distributed computing system; then
The device further comprises:
and the updating unit is used for acquiring the changed registration information through the scheduling server when the registration information of the computing nodes in the distributed computing system changes, and updating the locally stored registration information according to the changed registration information.
21. The apparatus of claim 17, wherein the apparatus further comprises:
the plan sending unit is used for sending a service result, the first execution plan and the task to a receiver node after the execution unit executes the task;
and the service result is an execution result obtained after the task is executed according to the first execution plan locally.
22. The apparatus of claim 17, wherein the apparatus is applied to a compute node in a distributed computing system; the first computing node specifically includes: the computing node is determined according to the attribute information and the resource occupation condition of the computing node, is matched with the attribute information, and meets the preset resource occupation requirement; then
The device further comprises:
and the resource condition sending unit is used for sending the local resource use condition to the scheduling server in the distributed system.
CN201610980192.6A 2016-11-08 2016-11-08 Execution plan generation method, task execution method and device Active CN108062243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610980192.6A CN108062243B (en) 2016-11-08 2016-11-08 Execution plan generation method, task execution method and device


Publications (2)

Publication Number Publication Date
CN108062243A CN108062243A (en) 2018-05-22
CN108062243B true CN108062243B (en) 2022-01-04

Family

ID=62136680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610980192.6A Active CN108062243B (en) 2016-11-08 2016-11-08 Execution plan generation method, task execution method and device

Country Status (1)

Country Link
CN (1) CN108062243B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109240823B * 2018-08-10 2019-08-02 北京小蓦机器人技术有限公司 Method, apparatus and readable storage medium for multi-terminal coordinated task processing
CN109656699A * 2018-12-14 2019-04-19 平安医疗健康管理股份有限公司 Distributed computing method, device, system, equipment and readable storage medium
CN109656685A * 2018-12-14 2019-04-19 深圳市网心科技有限公司 Container resource scheduling method and system, server and computer readable storage medium
CN111342986B (en) * 2018-12-19 2022-09-16 杭州海康威视系统技术有限公司 Distributed node management method and device, distributed system and storage medium
CN110163250B (en) * 2019-04-10 2023-10-24 创新先进技术有限公司 Image desensitization processing system, method and device based on distributed scheduling
CN111522630B (en) * 2020-04-30 2021-04-06 北京江融信科技有限公司 Method and system for executing planned tasks based on batch dispatching center
CN112905350A (en) * 2021-03-22 2021-06-04 北京市商汤科技开发有限公司 Task scheduling method and device, electronic equipment and storage medium
CN113485765B (en) * 2021-07-07 2023-09-22 上海顺舟智能科技股份有限公司 Control strategy configuration method, device, equipment and medium of intelligent equipment of Internet of things
CN114205842B (en) * 2021-11-03 2024-02-02 深圳市九洲电器有限公司 Device cooperation synchronization method, system, device, terminal device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2275062T3 * 2002-05-03 2007-06-01 Thales Pointing scheduling method for a multifunction radar, a sonar or a lidar
CN102567086A (en) * 2010-12-30 2012-07-11 中国移动通信集团公司 Task scheduling method, equipment and system
CN103870317A (en) * 2012-12-10 2014-06-18 中兴通讯股份有限公司 Task scheduling method and system in cloud computing
CN104239141A (en) * 2014-09-05 2014-12-24 北京邮电大学 Task optimized-scheduling method in data center on basis of critical paths of workflow
CN104461752A (en) * 2014-11-21 2015-03-25 浙江宇视科技有限公司 Two-level fault-tolerant multimedia distributed task processing method
CN105049268A (en) * 2015-08-28 2015-11-11 东方网力科技股份有限公司 Distributed computing resource allocation system and task processing method

Also Published As

Publication number Publication date
CN108062243A (en) 2018-05-22

Similar Documents

Publication Publication Date Title
CN108062243B (en) Execution plan generation method, task execution method and device
CN107332876B (en) Method and device for synchronizing block chain state
US11411897B2 (en) Communication method and communication apparatus for message queue telemetry transport
US10623516B2 (en) Data cloud storage system, client terminal, storage server and application method
CN109923847B (en) Discovery method, device, equipment and storage medium for call link
US11822453B2 (en) Methods and systems for status determination
CN111966289B (en) Partition optimization method and system based on Kafka cluster
CN108512672B (en) Service arranging method, service management method and device
CN110109766B (en) Data interaction method and device based on cross-department and cross-platform data sharing exchange
CN109413202B (en) System and method for sorting block chain transaction information
US20200142759A1 (en) Rest gateway for messaging
CN113596078A (en) Service problem positioning method and device
CN109618187B (en) Video data acquisition method and device
CN111447143A (en) Business service data transmission method and device, computer equipment and storage medium
CN106790354B (en) Communication method and device for preventing data congestion
CN109413117B (en) Distributed data calculation method, device, server and computer storage medium
CN117354312A (en) Access request processing method, device, system, computer equipment and storage medium
CN112019604A (en) Edge data transmission method and system
CN112579639A (en) Data processing method and device, electronic equipment and storage medium
CN109347766A (en) Resource scheduling method and device
CN107493308B (en) Method and device for sending message and distributed equipment cluster system
CN112788054B (en) Internet of things data processing method, system and equipment
CN115396494A (en) Real-time monitoring method and system based on stream computing
US11190432B2 (en) Method and first node for managing transmission of probe messages
CN108093147B (en) Distributed multi-stage scheduling method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant