CN113553178A - Task processing method and device and electronic equipment - Google Patents

Task processing method and device and electronic equipment

Info

Publication number
CN113553178A
Authority
CN
China
Prior art keywords
target
task
data processing
node
computing node
Prior art date
Legal status
Pending
Application number
CN202110803833.1A
Other languages
Chinese (zh)
Inventor
黄文楷
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202110803833.1A
Publication of CN113553178A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing

Abstract

Embodiments of the present disclosure disclose a task processing method and device and electronic equipment. One embodiment of the method comprises: in response to receiving a target data processing task sent by an integrated application, selecting an idle computing node from an idle computing node set as a target computing node; removing the target computing node from the set of idle computing nodes; processing the target data processing task using the target computing node; and, in response to determining that the target computing node has finished processing the target data processing task, adding the target computing node back to the set of idle computing nodes. A new task processing mode can thereby be provided.

Description

Task processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a task processing method and apparatus, and an electronic device.
Background
An iPaaS (Integration Platform as a Service) is an integration service platform. It enables an enterprise to quickly build system integration models and manage data reasonably, and it provides a converged platform for any user who needs integration; through UI integration, application integration, and data integration, it meets lightweight, full-scenario, and dynamically extensible integration requirements across all of an enterprise's business systems.
After an integrated application is built, the user deploys it. The common deployment scheme today is for the user to build a cluster in a cloud or data center, deploy a single integrated application onto the cluster, and let the cluster support the operation of that application. All tasks of the integrated application are load balanced across the cluster and are shared among, and executed by, all instances of the cluster.
Disclosure of Invention
This disclosure is provided to introduce concepts in a simplified form that are further described below in the detailed description. This disclosure is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides a task processing method, where the method includes: in response to receiving a target data processing task sent by the integrated application, selecting an idle computing node from the idle computing node set as a target computing node; removing the target compute node from the set of idle compute nodes; processing the target data processing task using the target compute node; in response to determining that the target computing node is finished processing the target data processing task, adding the target computing node to the set of idle computing nodes.
In a second aspect, an embodiment of the present disclosure provides a task processing apparatus, including a selecting unit, configured to select an idle computing node from an idle computing node set as a target computing node in response to receiving a target data processing task sent by an integrated application; a removing unit for removing the target compute node from the set of idle compute nodes; a computing unit for processing the target data processing task using the target computing node; an adding unit, configured to add the target compute node to the set of idle compute nodes in response to determining that the target compute node completes processing the target data processing task.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the task processing method according to the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the task processing method according to the first aspect.
It should be noted that, with the task processing method, a computing node in the idle state may be selected to execute the target data processing task, and the selected target computing node is removed from the idle computing node set; when the target computing node completes the target data processing task, the target computing node is added back to the set of idle computing nodes. Therefore, the target computing node will not receive other computing tasks while processing the target data processing task, mutual influence between the target data processing task and other computing tasks is avoided, the execution success rate of the target data processing task is improved, and the reliability of task processing is improved.
In contrast, in some related art, task scheduling is performed in a load balancing manner among the computing nodes (or instances). Load balancing may result in one computing node executing multiple tasks at the same time. In that case, the computing node is effectively multiplexed between tasks, so the tasks influence one another and reliability is low. In severe cases, a task that occupies a large amount of resources (e.g., CPU and/or memory) may undermine system stability, cause the computing node to crash, and cause unrelated tasks to fail to execute.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram for one embodiment of a task processing method according to the present disclosure;
FIG. 2 is a schematic diagram of one application scenario of a task processing method according to the present disclosure;
FIG. 3 is a schematic diagram of another application scenario of a task processing method according to the present disclosure;
FIG. 4 is a schematic block diagram of one embodiment of a task processing device according to the present disclosure;
FIG. 5 is an exemplary system architecture to which the task processing method of one embodiment of the present disclosure may be applied;
fig. 6 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will recognize that they should be understood as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to FIG. 1, a flow diagram of one embodiment of a task processing method according to the present disclosure is shown. The task processing method is applied to the server. The task processing method as shown in fig. 1 includes the following steps:
step 101, in response to receiving a target data processing task sent by an integrated application, selecting an idle computing node from an idle computing node set as a target computing node.
In this embodiment, an execution subject (e.g., a server) of the task processing method may select an idle computing node from the set of idle computing nodes in response to receiving a target data processing task sent by the integrated application, and use the selected idle computing node as a target computing node.
Here, application integration organically combines application software and systems that were built with different schemes on various platforms into a single system that is seamless, parallel, and easy to access, so that they perform business processing and share information as a whole. Application integration spans three levels: database, business logic, and user interface. The applications integrated in this way may be referred to as integrated applications.
Here, a cloud server may receive the target data processing task sent by the integrated application. For example, the cloud server may include a task triggering system and a task distribution system: the task triggering system may receive the target data processing task, and the task distribution system (backed by a distributed coordination system such as ZooKeeper or etcd, for example) may select an idle computing node and distribute the target data processing task to it.
Here, the idle computing nodes in the idle computing node set may be computing nodes in an idle state. The idle state may include a state in which no task is performed.
Here, the method for selecting the idle computing node may be implemented in various manners, and is not limited herein.
For example, a computing node may be selected at random from the set of idle computing nodes, or nodes may be selected from the set in a round-robin (polling) manner.
In some application scenarios, the computing resources in the computing server are divided to obtain a plurality of computing nodes. After the computing nodes in the computing server are started, the computing nodes can be mounted in an idle computing node pool (i.e., an idle computing node set) of the distributed coordination system.
Here, the selected idle computing node is referred to as the target computing node for convenience of description. In fact, once selected, the node is no longer an idle computing node until its task execution is complete, so calling it the target computing node better matches the actual situation.
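To make the selection step concrete, the following is a minimal Python sketch of the two selection strategies mentioned above (random and round-robin); the node names, function names, and module-level cursor are illustrative assumptions rather than part of the disclosed system.

```python
import itertools
import random

# Snapshot of the idle computing node set, e.g. as read from the
# distributed coordination system; the node names are made up here.
idle_nodes = ["worker-1", "worker-2", "worker-3"]

def pick_random(nodes):
    """Randomly select an idle computing node as the target node."""
    return random.choice(nodes)

# Round-robin ("polling") selection keeps a cursor across calls.
_cursor = itertools.count()

def pick_round_robin(nodes):
    """Select idle computing nodes in turn."""
    return nodes[next(_cursor) % len(nodes)]

target_node = pick_random(idle_nodes)       # e.g. "worker-2"
target_node = pick_round_robin(idle_nodes)  # "worker-1", then "worker-2", ...
```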
Step 102, removing the target computing node from the idle computing node set.
In this embodiment, the execution subject may remove the target computing node from the set of idle computing nodes.
In some application scenarios, after a computing node in the computing server is started, it may create its own task allocation (assign) node and a worker node (worker) representing itself. Upon determining to assign the target data processing task to the target computing node, the target computing node may be removed from the set of idle computing nodes and the target data processing task distributed to that node's task allocation node, both within one transaction.
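As an illustration of performing the removal and the assignment within one transaction, the sketch below uses an in-process lock as a stand-in for a transaction in a distributed coordination system such as ZooKeeper or etcd; the names idle_nodes, assign_records, and dispatch are assumptions made for this example.

```python
import threading

coord_lock = threading.Lock()            # stands in for one coordination-system transaction
idle_nodes = {"worker-1", "worker-2"}    # the idle computing node set
assign_records = {}                      # task allocation ("assign") record per worker

def dispatch(task_id, payload):
    """Pick a target node, remove it from the idle set, and hand it the task,
    all under a single lock so the two actions succeed or fail together."""
    with coord_lock:
        if not idle_nodes:
            raise RuntimeError("no idle computing node available")
        target = idle_nodes.pop()        # select and remove the target node
        assign_records[target] = {       # record the assignment on the node's assign record
            "occupied": True,
            "task_id": task_id,
            "payload": payload,
        }
    return target

target = dispatch("task-42", {"op": "sync-orders"})
print(target, assign_records[target]["task_id"])   # e.g. worker-2 task-42
```

The point being illustrated is only atomicity: the removal from the idle set and the write to the assign record are not observable separately.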
Step 103, processing the target data processing task by using the target computing node.
In this embodiment, the target computing node may begin executing the task after determining that the task has been assigned to it.
In some application scenarios, the computing node may start executing the task after observing, through its task allocation node, that the task has been allocated to it. After the task is completed, the assignment information recorded on the task allocation node can be removed, and the computing node is added back to the set of idle computing nodes. It should be noted that the removal and the adding back are two separate actions.
Step 104, in response to determining that the target computing node is finished processing the target data processing task, adding the target computing node to the set of idle computing nodes.
In this embodiment, if the target computing node completes processing the target data processing task, the target computing node may be added back to the set of idle computing nodes.
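A worker-side counterpart for steps 103 and 104 might look like the sketch below, which pairs with the dispatch sketch after step 102; the polling interval, the execute_task placeholder, and the shared in-memory structures are illustrative assumptions, not the disclosed implementation.

```python
import threading
import time

coord_lock = threading.Lock()   # stands in for the distributed coordination system
idle_nodes = set()              # idle computing node set
assign_records = {}             # per-worker task allocation ("assign") records

def execute_task(payload):
    """Placeholder for the actual data processing work."""
    time.sleep(0.1)

def worker_loop(node_id, stop):
    with coord_lock:                                 # on start-up, mount into the idle pool
        idle_nodes.add(node_id)
    while not stop.is_set():
        with coord_lock:
            record = assign_records.get(node_id)     # watch the node's own assign record
        if record is None:
            time.sleep(0.05)                         # nothing assigned yet
            continue
        execute_task(record.get("payload"))          # step 103: process the task
        with coord_lock:                             # step 104: clear the record and
            assign_records.pop(node_id, None)        # add the node back to the idle set
            idle_nodes.add(node_id)

stop = threading.Event()
threading.Thread(target=worker_loop, args=("worker-1", stop), daemon=True).start()
with coord_lock:                                     # dispatch side (steps 101/102)
    idle_nodes.discard("worker-1")
    assign_records["worker-1"] = {"task_id": "task-42", "payload": None}
time.sleep(0.5)
stop.set()
```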
It should be noted that, in the task processing method provided in this embodiment, a computing node in the idle state may be selected to execute the target data processing task, and the selected target computing node is removed from the idle computing node set; when the target computing node completes the target data processing task, the target computing node is added back to the set of idle computing nodes. Therefore, the target computing node will not receive other computing tasks while processing the target data processing task, mutual influence between the target data processing task and other computing tasks is avoided, the execution success rate of the target data processing task is improved, and the reliability of task processing is improved.
In contrast, in some related art, task scheduling is performed in a load balancing manner among the computing nodes (or instances). Load balancing may result in one computing node executing multiple tasks at the same time. In that case, the computing node is effectively multiplexed between tasks, so the tasks influence one another and reliability is low. In severe cases, a task that occupies a large amount of resources (e.g., CPU and/or memory) may undermine system stability, cause the computing node to crash, and cause unrelated tasks to fail to execute.
In some embodiments, the method may further include: isolating the computing resources in the computing server to obtain at least two computing nodes.
Here, the computing server may perform resource isolation using various techniques, and each block of isolated resources may serve as an individual computing node. Such a computing node may also be referred to as a logical computing node.
It should be noted that, because the computing resources of different computing nodes are isolated from each other, mutual interference between the computing nodes, and between the tasks they run, can be avoided. Therefore, task execution failures caused by contention for computing resources can be reduced, and the stability with which the computing server processes tasks is improved.
In some application scenarios, the above step 101 may include: determining a computing cluster from at least one computing cluster; obtaining a list of idle computing nodes of the determined computing cluster; and selecting a computing node from the list of idle computing nodes.
Here, the number of computing clusters may be one or more, for example, the computing servers may include computing cluster a, computing cluster b, and computing cluster c. A computing cluster may include one or more computing nodes.
In some embodiments, the isolating of the computing resources in the computing server to obtain at least two computing nodes may include: isolating resources in the computing server based on a container technology to obtain at least two computing nodes.
It will be understood that a container is essentially a process, and processes are isolated from one another so that containers do not affect each other. As an example, when a container is started (i.e., a process is created), isolation of the container is achieved through namespace technology, and resource control of the container is achieved through cgroups.
For example, a container process may first be established, isolating resources such as storage, network, process, user, hostname, domain name, and inter-process communication. After the container process is created, cgroups (Linux control groups) may be configured to cap the resources (such as CPU, memory, and network) that the process can use. Thus, the resources (e.g., CPU, memory) consumed and occupied by the process during its operation cannot be encroached upon by other host processes or other container processes.
It should be noted that, by using the container technology, the computing nodes are obtained by isolating the resources, so that mutual isolation between the computing nodes can be realized, the isolation degree between the computing nodes can be improved, and mutual interference between the computing nodes is avoided.
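As one concrete, but hypothetical, way of capping a computing node's resources, the sketch below writes cgroup v2 limit files directly; it assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup, sufficient privileges, and made-up limit values, and it is not necessarily the container runtime contemplated by this disclosure.

```python
import os
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")   # assumes cgroup v2 mounted here

def limit_compute_node(name, pid, memory_bytes, cpu_quota_us, cpu_period_us=100_000):
    """Place `pid` in its own cgroup with a memory cap and a CPU bandwidth quota."""
    group = CGROUP_ROOT / name
    group.mkdir(exist_ok=True)
    (group / "memory.max").write_text(str(memory_bytes))               # hard memory limit
    (group / "cpu.max").write_text(f"{cpu_quota_us} {cpu_period_us}")  # CPU bandwidth limit
    (group / "cgroup.procs").write_text(str(pid))                      # move the process in

if __name__ == "__main__":
    # Illustrative values: 512 MiB of memory and half of one CPU core.
    limit_compute_node("compute-node-1", os.getpid(),
                       memory_bytes=512 * 1024 * 1024,
                       cpu_quota_us=50_000)
```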
In some embodiments, the method may further include: in response to assigning a target data processing task to the target compute node, modifying state information associated with the target compute node to a first state value.
Here, the first state value may include occupation state indication information and a target data processing task identifier.
Here, the occupation state indication information may indicate that the computing node is in an occupied state.
Here, the target data processing task identification may include, but is not limited to, at least one of: task name, task data, etc.
In some application scenarios, the computing node may create its own task allocation node and watch it. If it observes that the task allocation node has received a data processing task, it can execute that task. The information in the task allocation node may serve as the state information associated with the computing node: if that state changes, the computing node can determine that a task has been allocated to it and then start processing the task.
It should be noted that, when it is determined that the target data processing task is allocated to the target computing node, the state information associated with the target computing node is modified promptly, so that whether the computing node is processing a task can be determined at any time.
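Purely for illustration, the first state value described here, and the second state value described later, could be represented as a small record like the following; the field names occupied and task_id are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeState:
    occupied: bool                      # occupation / idle state indication
    task_id: Optional[str] = None       # set only while a task is assigned

def first_state_value(task_id):
    """State written when the target task is allocated to the node."""
    return NodeState(occupied=True, task_id=task_id)

def second_state_value():
    """State written when the node finishes the task and becomes idle again."""
    return NodeState(occupied=False)

state = first_state_value("task-42")
print(state.occupied, state.task_id)    # True task-42
```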
In some embodiments, the method further comprises: in response to the target data processing task being assigned to the target computing node, requesting, from an electronic device that stores the task state of the target data processing task, that the task state be modified from an original task state value to a processing task state value; and, if the requested modification fails, stopping processing of the target data processing task.
For example, suppose computing node A receives a task first and applies to the database to change the task from its original state to the in-progress state; then, due to a system error, computing node B also receives the task, i.e., one task has been distributed to two different computing nodes. In this case, when computing node B applies to the database to change the task state, the database returns a result indicating that the modification failed. Since computing node B did not successfully change the task state, it considers the task to have been repeatedly distributed and does not process it.
Thus, it can be ensured that the task is not processed repeatedly by two or more computing nodes. The task is therefore guaranteed to be processed by exactly one computing node, and waste of computing resources is avoided.
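The duplicate-assignment check can be pictured as a conditional, compare-and-set style update against the task store; the sketch below uses an in-memory SQLite table as a stand-in for the database mentioned above, with assumed table and column names.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id TEXT PRIMARY KEY, state TEXT, claimed_by TEXT)")
db.execute("INSERT INTO tasks VALUES ('task-42', 'PENDING', NULL)")
db.commit()

def try_claim(task_id, node_id):
    """Atomically move the task from PENDING to PROCESSING.
    Returns False if another node has already claimed it."""
    cur = db.execute(
        "UPDATE tasks SET state = 'PROCESSING', claimed_by = ? "
        "WHERE id = ? AND state = 'PENDING'",
        (node_id, task_id),
    )
    db.commit()
    return cur.rowcount == 1

print(try_claim("task-42", "A"))   # True: node A claims the task and processes it
print(try_claim("task-42", "B"))   # False: node B sees a failed modification and skips it
```

Only one of the two competing nodes sees a row count of 1, which matches the example of computing nodes A and B above.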
In some embodiments, the step 104 may further include: in response to determining that processing of a target data processing task by a target compute node is complete, modifying state information associated with the target compute node to a second state value, wherein the second state value includes idle state indication information.
Here, the target computing node can know the progress of its own task execution. If the target computing node confirms that the target data processing task has been executed, the execution subject may modify the state information associated with the target computing node to a second state value that includes idle state indication information, and may move the target computing node back into the set of idle computing nodes.
It should be noted that, by promptly modifying the state information associated with the target computing node to the second state value, the state information can accurately indicate the task processing state of the target computing node.
In some embodiments, the above method further comprises: monitoring the life state of the target computing node; and, in response to determining that the target computing node is in a dead state, determining whether the target computing node has an unfinished data processing task based on the state information of the target computing node.
Here, the life state may include a death state.
In some application scenarios, the life state may also include a survival state.
In some application scenarios, a monitoring system in the cloud server can continuously monitor the liveness of each computing node (worker). When a computing node dies, the monitoring system identifies, from the node's task allocation (assign) record, whether the node was processing a task when it died, and performs the corresponding handling.
In some application scenarios, the master monitoring node may be established based on a distributed lock. In this way, only one computing node can act as the master monitoring node at any time, the uniqueness of the master monitoring node is guaranteed, and monitoring accuracy is improved.
In some application scenarios, a master-slave monitoring node switching mechanism can be set based on a distributed lock, so that the high reliability of a monitoring system is ensured.
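As one possible realization (an assumption, not necessarily the mechanism used here), a distributed lock recipe such as the one in the kazoo ZooKeeper client can keep exactly one master monitoring node at a time; the ZooKeeper address, lock path, and identifier below are placeholders.

```python
from kazoo.client import KazooClient

def run_monitor_loop():
    """Placeholder for the actual liveness-monitoring work."""

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Only the instance that currently holds the lock acts as the master monitor;
# if it dies, the lock is released and a standby instance takes over.
lock = zk.Lock("/task-system/monitor-master", identifier="monitor-instance-1")
with lock:
    run_monitor_loop()

zk.stop()
```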
Here, the state value of the state information may be a first state value or a second state value. The first state value may indicate that the target computing node is processing a task, and the second state value may indicate that the target computing node is in the idle state. Thus, whether the target computing node has an unfinished data processing task can be determined based on the state information.
In some embodiments, determining whether the target computing node has an unfinished data processing task according to the state information of the target computing node may include: in response to determining that the state information of the target computing node is the first state value, acquiring the data processing task identifier in the first state value; and, for the unfinished data processing task, searching for and selecting an idle computing node again.
Here, the state information associated with the target computing node being the first state value indicates that the target computing node has an unfinished task.
Here, if the target computing node has an unfinished task, the target computing node may have been executing the target data processing task, or it may have acquired a new task after being moved back into the set of idle computing nodes. It will therefore be understood that, for the target computing node, the data processing task identifier in the first state value may be the target data processing task identifier or the identifier of another data processing task.
It should be noted that the state information of the target computing node is checked first; when the state information is the first state value, the data processing task identifier in the first state value is acquired, and an idle computing node is reselected to execute the unfinished data processing task. This ensures that the scheme is highly reliable: the death of a computing node during task execution can be identified automatically, and subsequent handling such as retry or redistribution is supported.
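Putting the liveness check and the state check together, a single monitoring pass might look like the following sketch; the heartbeat timestamps, the 10-second timeout, and the requeue function are illustrative assumptions.

```python
import time

HEARTBEAT_TIMEOUT = 10.0                     # seconds without a heartbeat => dead

heartbeats = {"worker-1": time.time() - 30}  # last heartbeat per worker (stale here)
node_states = {                              # state information per worker
    "worker-1": {"occupied": True, "task_id": "task-42"},   # first state value
}

def requeue(task_id):
    """Hand the unfinished task back to the distribution system for a new idle node."""
    print(f"reassigning {task_id} to another idle computing node")

def monitor_pass(now=None):
    now = now or time.time()
    for node, last_seen in heartbeats.items():
        if now - last_seen <= HEARTBEAT_TIMEOUT:
            continue                         # node is alive
        state = node_states.get(node, {})
        if state.get("occupied"):            # node died with an unfinished task
            requeue(state["task_id"])

monitor_pass()                               # prints: reassigning task-42 ...
```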
In some application scenarios, please refer to fig. 2, which illustrates an application scenario diagram in some embodiments of the present application. In fig. 2, the task trigger system may perform step 201. The task distribution system may perform step 202, step 203, and step 204. The computing system may perform step 205, step 206, step 207, step 208, and step 209.
Step 201, receiving a task.
Step 202, determining, according to a rule, which computing cluster the task is allocated to.
Here, a compute cluster may include compute nodes.
Step 203, selecting the computing node in the idle state.
Step 204, modifying the state information corresponding to the computing node by using the distributed coordination system, and removing the computing node from the set of idle computing nodes.
By way of example, coordination may be performed using a distributed coordination system such as ZooKeeper or etcd.
Step 205, learning of the assigned task through the state information.
Here, the state information corresponding to the computing node may be pushed by using a distributed coordination system.
Step 206, task deduplication.
Here, task deduplication may confirm whether a task has been allocated to other compute nodes.
In some application scenarios, after the assigned task is obtained, an application may be made to the database storing the tasks to change the task state; if the application fails, the task is considered to have been repeatedly allocated and is not processed.
Step 207, executing the task.
Step 208, removing, by using the distributed coordination system, the assignment recorded on the task allocation node for the task.
Step 209, adding the computing node back to the set of idle computing nodes.
It should be noted that the architecture shown in fig. 2 presents the task processing flow from the perspective of how the task processing system is constructed and how its components cooperate. The application scenario shown in fig. 2 is described from a different perspective than some other embodiments in the present disclosure, but it points to a similar way of processing tasks.
Continuing to refer to FIG. 3, an application scenario diagram of a task processing method according to the present disclosure is shown.
In fig. 3, computing node A, computing node B, and computing node C may be taken as an illustration of the task execution system. In other words, the task execution system may include computing node A, computing node B, and computing node C. Each computing node in the task execution system that is responsible for computation is mounted as a child node under the idle node when it is started.
In fig. 3, when a task arrives, an instance of the task distribution system may select, from the idle computing nodes, a container in the task execution subsystem that is responsible for computation and is in the idle state, and complete the two actions of task distribution and deletion of the idle-state record within one transaction.
In fig. 3, the monitoring system continuously monitors the survival status of the task execution subsystem. When a node is found to be dead, operations such as task redistribution are carried out to ensure the reliability of the task.
With further reference to fig. 4, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of a task processing apparatus, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the task processing device of the present embodiment includes: a selecting unit 401, a removing unit 402, a computing unit 403, and an adding unit 404. The selecting unit is configured to, in response to receiving a target data processing task sent by an integrated application, select an idle computing node from an idle computing node set as a target computing node; the removing unit is configured to remove the target computing node from the set of idle computing nodes; the computing unit is configured to process the target data processing task using the target computing node; and the adding unit is configured to add the target computing node to the set of idle computing nodes in response to determining that the target computing node has completed processing the target data processing task.
In this embodiment, for the specific processing of the selecting unit 401, the removing unit 402, the computing unit 403, and the adding unit 404 of the task processing device, and for the technical effects thereof, reference may be made to the descriptions of step 101, step 102, step 103, and step 104 in the embodiment corresponding to fig. 1, which are not repeated here.
In some embodiments, the apparatus is further configured to: isolate computing resources in the computing server to obtain at least two computing nodes.
In some embodiments, said isolating computing resources in a compute server resulting in at least two compute nodes comprises: and isolating resources in the computing server based on a container technology to obtain at least two computing nodes.
In some embodiments, the apparatus is further configured to: in response to allocating the target data processing task to the target computing node, modifying state information associated with the target computing node to a first state value, wherein the first state value includes occupation state indication information and a target data processing task identifier.
In some embodiments, the apparatus is further configured to: in response to the target data processing task being assigned to the target computing node, requesting, from an electronic device storing a task state of the target data processing task, that the task state be modified from an original task state value to a processing task state value; and, if the requested modification fails, stopping processing of the target data processing task.
In some embodiments, said adding said target computing node to said set of idle computing nodes in response to determining that said target computing node is finished processing said target data processing task further comprises: in response to determining that the target data processing task is completed by the target compute node, modifying state information associated with the target compute node to a second state value, wherein the second state value includes idle state indication information.
In some embodiments, the apparatus is further configured to: monitoring a life state of the target computing node, wherein the life state comprises a death state; and, in response to determining that the target computing node is in a dead state, determining whether the target computing node has an unfinished data processing task based on state information of the target computing node.
In some embodiments, the determining whether the target computing node has an unfinished data processing task based on the state information of the target computing node includes: in response to the state information of the target computing node being a first state value, acquiring the data processing task identifier in the first state value, wherein the first state value includes occupation state indication information and a target data processing task identifier; and reselecting an idle computing node for the unfinished data processing task.
Referring to fig. 5, fig. 5 illustrates an exemplary system architecture to which the task processing method of one embodiment of the present disclosure may be applied.
As shown in fig. 5, the system architecture may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 501, 502, 503 may interact with the server 505 over the network 504 to receive or send messages and the like. Various client applications, such as a web browser application, a search application, and a news and information application, may be installed on the terminal devices 501, 502, 503. The client applications in the terminal devices 501, 502, 503 may receive instructions from the user and complete corresponding functions according to those instructions, for example, adding corresponding information to existing information according to the user's instruction.
The terminal devices 501, 502, 503 may be hardware or software. When the terminal devices 501, 502, 503 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like. When the terminal devices 501, 502, and 503 are software, they can be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 505 may be a server providing various services, for example, receiving an information acquisition request sent by the terminal devices 501, 502, 503, acquiring, in various ways, the presentation information corresponding to the information acquisition request, and sending relevant data of the presentation information to the terminal devices 501, 502, 503.
It should be noted that the task processing method provided by the embodiment of the present disclosure may be executed by a terminal device, and accordingly, the task processing device may be disposed in the terminal device 501, 502, 503. In addition, the task processing method provided by the embodiment of the present disclosure may also be executed by the server 505, and accordingly, a task processing device may be disposed in the server 505.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 6, shown is a schematic diagram of an electronic device (e.g., a terminal device or a server of fig. 5) suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a target data processing task sent by the integrated application, selecting an idle computing node from the idle computing node set as a target computing node; removing the target compute node from the set of idle compute nodes; processing the target data processing task using the target compute node; in response to determining that the target computing node is finished processing the target data processing task, adding the target computing node to the set of idle computing nodes.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not in some cases constitute a limitation on the unit itself; for example, the selecting unit may also be described as "a unit that selects an idle computing node as a target computing node".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. A task processing method, comprising:
in response to receiving a target data processing task sent by the integrated application, selecting an idle computing node from the idle computing node set as a target computing node;
removing the target compute node from the set of idle compute nodes;
processing the target data processing task using the target compute node;
in response to determining that the target computing node is finished processing the target data processing task, adding the target computing node to the set of idle computing nodes.
2. The method of claim 1, further comprising:
and isolating the computing resources in the computing server to obtain at least two computing nodes.
3. The method of claim 2, wherein isolating computing resources in the compute server resulting in at least two compute nodes comprises:
and isolating resources in the computing server based on a container technology to obtain at least two computing nodes.
4. The method of claim 1, further comprising:
in response to allocating the target data processing task to the target computing node, modifying state information associated with the target computing node to a first state value, wherein the first state value includes occupation state indication information and a target data processing task identifier.
5. The method of claim 1, further comprising:
in response to the target data processing task being assigned to the target computing node, requesting, from an electronic device storing a task state of the target data processing task, that the task state be modified from an original task state value to a processing task state value;
and if the requested modification fails, stopping processing of the target data processing task.
6. The method of claim 1, wherein adding the target computing node to the set of idle computing nodes in response to determining that the target computing node is finished processing the target data processing task further comprises:
in response to determining that the target data processing task is completed by the target compute node, modifying state information associated with the target compute node to a second state value, wherein the second state value includes idle state indication information.
7. The method of claim 1, further comprising:
monitoring a life state of the target computing node, wherein the life state comprises a death state;
in response to determining that the target computing node is in a dead state, determining whether the target computing node has an unfinished data processing task based on state information of the target computing node.
8. The method of claim 7, wherein determining whether the target computing node has an unfinished data processing task based on the state information of the target computing node comprises:
in response to the state information of the target computing node being a first state value, acquiring the data processing task identifier in the first state value, wherein the first state value comprises occupation state indication information and a target data processing task identifier;
and reselecting an idle computing node for the unfinished data processing task.
9. A task processing apparatus, comprising:
the selection unit is used for responding to the received target data processing task sent by the integrated application and selecting an idle computing node from the idle computing node set as a target computing node;
a removing unit for removing the target compute node from the set of idle compute nodes;
a computing unit for processing the target data processing task using the target computing node;
an adding unit, configured to add the target compute node to the set of idle compute nodes in response to determining that the target compute node completes processing the target data processing task.
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202110803833.1A 2021-07-15 2021-07-15 Task processing method and device and electronic equipment Pending CN113553178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110803833.1A CN113553178A (en) 2021-07-15 2021-07-15 Task processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110803833.1A CN113553178A (en) 2021-07-15 2021-07-15 Task processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113553178A true CN113553178A (en) 2021-10-26

Family

ID=78131871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110803833.1A Pending CN113553178A (en) 2021-07-15 2021-07-15 Task processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113553178A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510329A (en) * 2022-01-21 2022-05-17 北京火山引擎科技有限公司 Method, device and equipment for determining predicted output time of task node
CN115955319A (en) * 2023-03-14 2023-04-11 季华实验室 Data set generation system
CN117764206A (en) * 2024-02-21 2024-03-26 卓世智星(天津)科技有限公司 Multi-model integration method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314401A (en) * 2018-12-12 2020-06-19 百度在线网络技术(北京)有限公司 Resource allocation method, device, system, terminal and computer readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314401A (en) * 2018-12-12 2020-06-19 百度在线网络技术(北京)有限公司 Resource allocation method, device, system, terminal and computer readable storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510329A (en) * 2022-01-21 2022-05-17 北京火山引擎科技有限公司 Method, device and equipment for determining predicted output time of task node
CN114510329B (en) * 2022-01-21 2023-08-08 北京火山引擎科技有限公司 Method, device and equipment for determining estimated output time of task node
CN115955319A (en) * 2023-03-14 2023-04-11 季华实验室 Data set generation system
CN115955319B (en) * 2023-03-14 2023-06-02 季华实验室 Data set generation system
CN117764206A (en) * 2024-02-21 2024-03-26 卓世智星(天津)科技有限公司 Multi-model integration method and system

Similar Documents

Publication Publication Date Title
US11159411B2 (en) Distributed testing service
CN113553178A (en) Task processing method and device and electronic equipment
CN106657314B (en) Cross-data center data synchronization system and method
CN110851139B (en) Method and device for checking codes and electronic equipment
CN110391938B (en) Method and apparatus for deploying services
CN110781373B (en) List updating method and device, readable medium and electronic equipment
CN107172214B (en) Service node discovery method and device with load balancing function
CN111857720B (en) User interface state information generation method and device, electronic equipment and medium
CN113835992A (en) Memory leak processing method and device, electronic equipment and computer storage medium
CN110650209A (en) Method and device for realizing load balance
CN116541142A (en) Task scheduling method, device, equipment, storage medium and computer program product
CN115237589A (en) SR-IOV-based virtualization method, device and equipment
CN111338834A (en) Data storage method and device
CN111444148B (en) Data transmission method and device based on MapReduce
CN112306685A (en) Task isolation method and device, electronic equipment and computer readable medium
CN115729645A (en) Micro-service configuration method and device, electronic equipment and readable storage medium
CN111538717B (en) Data processing method, device, electronic equipment and computer readable medium
CN114625479A (en) Cloud edge collaborative application management method in edge computing and corresponding device
CN114237902A (en) Service deployment method and device, electronic equipment and computer readable medium
CN113391882A (en) Virtual machine memory management method and device, storage medium and electronic equipment
CN111538721A (en) Account processing method and device, electronic equipment and computer readable storage medium
CN111309367A (en) Method, device, medium and electronic equipment for managing service discovery
CN116820354B (en) Data storage method, data storage device and data storage system
CN115878586B (en) IPFS storage encapsulation method and device, electronic equipment and readable storage medium
CN116319322B (en) Power equipment node communication connection method, device, equipment and computer medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination