CN113377543A - Task processing system, electronic device, and storage medium - Google Patents

Task processing system, electronic device, and storage medium

Info

Publication number
CN113377543A
CN113377543A (application CN202110718641.0A)
Authority
CN
China
Prior art keywords
execution
task
data
management
management process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110718641.0A
Other languages
Chinese (zh)
Inventor
肖波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Technology Development Co Ltd
Original Assignee
Shanghai Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Technology Development Co Ltd filed Critical Shanghai Sensetime Technology Development Co Ltd
Priority to CN202110718641.0A priority Critical patent/CN113377543A/en
Publication of CN113377543A publication Critical patent/CN113377543A/en
Priority to KR1020227020149A priority patent/KR20230005106A/en
Priority to PCT/CN2021/125004 priority patent/WO2023273025A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/448Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4482Procedural
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/547Messaging middleware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Abstract

The present disclosure relates to a task processing system, an electronic device, and a storage medium. The system includes at least one processor for running a management process, an intermediate process, and an execution process, where the management process manages the intermediate process and the execution process while a preset task is being processed, the intermediate process creates the execution process, and the execution process executes a subtask of the preset task. The management process is configured to: detect, when there is a subtask to be executed, whether the intermediate process has been created; and, if the intermediate process has been created, send a process creation request for the subtask to the intermediate process. The intermediate process is configured to create the execution process in response to the process creation request sent by the management process. The execution process is configured to: execute the subtask to obtain an execution result of the subtask; and send the execution result of the subtask to the management process. Embodiments of the present disclosure help reduce process deadlocks.

Description

Task processing system, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task processing system, an electronic device, and a storage medium.
Background
In the related art, a multi-threaded parent process may create a child process through the fork system function provided by the operating system. Because of the way fork creates a child process, when one thread of the parent process (such as the main thread) calls fork, only that calling thread is replicated into the child process; the parent's other threads do not exist in the child. Consequently, if another thread of the parent process has acquired a lock object in order to run the code that the lock protects, and the thread creating the child process calls fork before that lock is released, the lock can never be released inside the child process, because the thread holding it does not exist there. Any attempt by the child process to acquire the lock therefore blocks forever, so the child process hangs in a deadlock.
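The following minimal C program, written for illustration only, reproduces the hazard described above: a helper thread holds a mutex at the moment the main thread calls fork, so the child process inherits a locked mutex whose owning thread no longer exists and blocks forever when it tries to acquire it.

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *helper(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);     /* helper thread acquires the lock ...        */
    sleep(5);                      /* ... and holds it while doing some "work"   */
    pthread_mutex_unlock(&lock);   /* released in the parent, never in the child */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, helper, NULL);
    sleep(1);                      /* give the helper time to take the lock      */

    pid_t pid = fork();            /* the child inherits a locked mutex, but the */
    if (pid == 0) {                /* helper thread does not exist in the child  */
        pthread_mutex_lock(&lock); /* blocks forever: deadlock in the child      */
        printf("child: never reached\n");
        _exit(0);
    }
    pthread_join(tid, NULL);
    waitpid(pid, NULL, 0);         /* the parent waits forever for the hung child */
    return 0;
}
```

Compiled with -pthread, the child never prints and the overall task never completes, which is exactly the failure mode the present disclosure aims to avoid.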
Disclosure of Invention
The present disclosure proposes a task processing technical solution.
According to an aspect of the present disclosure, there is provided a task processing system, the system including at least one processor, the processor being configured to run a management process, an intermediate process and an execution process, the management process being configured to manage the intermediate process and the execution process during processing of a preset task, the intermediate process being configured to create the execution process, the execution process being configured to execute a sub-task of the preset task, the management process being configured to: detecting whether the intermediate process is established or not under the condition that the subtask to be executed exists; sending a process creation request for the subtask to the intermediate process if the intermediate process has been created; the intermediary process is configured to: responding to a process creation request sent by the management process, and creating an execution process; the execution process is configured to: executing the subtask to obtain an execution result of the subtask; and sending the execution result of the subtask to the management process.
In one possible implementation, the management process is further configured to: under the condition that the intermediate process is not created, stopping the sub-thread of the management process and cleaning resources corresponding to the sub-thread; and creating the intermediate process under the condition that the child thread is stopped and the resource is cleaned, wherein the intermediate process is placed in the background to continuously run after being created.
In one possible implementation manner, the process creation request includes a process creation parameter, where the process creation parameter is used to instruct the intermediate process to create an execution process, and the creating, by the intermediate process, an execution process in response to the process creation request sent by the management process includes: and creating the execution process according to the process creation parameters, and creating a communication channel between the management process and the execution process, wherein the communication channel is used for realizing communication between the management process and the execution process.
In one possible implementation, the subtasks include a data reading task and/or a data preprocessing task, and the management process is further configured to: and under the condition that the execution process is established, sending a data index of the preset task to the execution process so as to enable the execution process to execute the data reading task and/or the data preprocessing task, wherein the data index is used for indicating a reading batch and a reading address of data to be processed, and the data to be processed comprises at least one of images, videos, texts and voices.
In a possible implementation manner, the executing process executes the subtask to obtain an execution result of the subtask, including: under the condition of receiving the data index, executing a data reading task according to the reading batch and the reading address indicated by the data index to obtain read data; and/or, executing a data preprocessing task according to the data read by the execution process to obtain preprocessed data; wherein the execution result comprises the read data and/or the pre-processing data.
In one possible implementation, the management process is further configured to: sending an execution process termination request to the intermediate process when the execution process completes the execution of the subtask and the management process has received the execution result; the intermediary process is further configured to: and sending an execution process termination instruction to the execution process to terminate the execution process under the condition of receiving the execution process termination request.
In one possible implementation, the management process is further configured to: sending an intermediate process termination instruction to the intermediate process to terminate the intermediate process under the condition that the preset task is completed based on the execution result; in the event that the intermediate process has terminated, terminating the management process.
In one possible implementation, the intermediate process is a single-threaded process; the intermediate process creates the execution process through a fork system function; the preset task comprises at least one of a model training task, an image processing task, a video processing task, a voice recognition task and a natural language processing task.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to execute the above-described system.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described system.
In the embodiments of the present disclosure, while the task processing system processes the preset task, a three-level process creation structure of management process, intermediate process, and execution process can be realized, in which the execution process that executes the subtasks is created by the intermediate process. This helps ensure the thread safety of the management process; that is, the deadlock that would be caused by the task-running threads of the management process disappearing when the execution process is created can be effectively avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram showing a process creation process according to the related art.
FIG. 2 shows a schematic diagram of a process creation process according to an embodiment of the present disclosure.
FIG. 3 shows a schematic diagram of a task processing system according to an embodiment of the present disclosure.
FIG. 4 shows a schematic diagram of a processing method of a model training task according to an embodiment of the present disclosure.
FIG. 5 shows a schematic diagram of a process creation process according to an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 is a schematic diagram showing a process creation procedure according to the related art. As shown in Fig. 1, when data reading and data preprocessing tasks are executed, the main process in the related art generally calls the fork system function directly to create a sub-process. The sub-process responds to a data reading request sent by the main process, executes the data reading and data preprocessing tasks, and sends the read and preprocessed data back to the main process; this procedure may loop until all data has been read and preprocessed. As described above, this approach is prone to deadlock, so that the task the main process and the sub-process jointly implement (such as a model training task) hangs and cannot recover automatically.
FIG. 2 shows a schematic diagram of a process creation process according to an embodiment of the present disclosure. Compared with the process creation process of the related art, in the process creation process shown in fig. 2, a main process creates an intermediate process first, where the intermediate process may be a single-threaded process; under the condition that the intermediate process is created, the main process can send a process creation request to the intermediate process, and the intermediate process responds to the process creation request and calls a fork system function to create a sub-process; wherein the process creation request may be sent multiple times, and the intermediate process may create multiple sub-processes in response to the multiple process creation requests.
According to the process creation procedure of the embodiments of the present disclosure, a three-level process creation structure of main process, intermediate process, and sub-process can be realized, in which the sub-process is created by the intermediate process. This helps ensure the thread safety of the main process; that is, it helps avoid the deadlock caused by the running threads of the main process disappearing when the sub-process is created.
Fig. 3 is a schematic diagram of a task processing system according to an embodiment of the present disclosure, where the system is applicable to an electronic device such as a terminal device or a server, and the terminal device may include: user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, and the like.
The system may be configured to process a preset task, and the system may include at least one processor, where the processor may include any type of processor, such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Neural-Network Processing Unit (NPU), and the like, and the embodiments of the present disclosure are not limited thereto.
As shown in fig. 3, the system includes a processor for running a management process 101, an intermediate process 102 and an execution process 103, wherein the management process 101 is used for managing the intermediate process and the execution process during processing a preset task, the intermediate process 102 is used for creating the execution process, and the execution process 103 is used for executing a sub-task of the preset task. It should be understood that the management process 101, the intermediate process 102, and the execution process 103 may run in the same processor, or may run in different processors, respectively, and the embodiment of the present disclosure is not limited thereto.
The preset task may include at least one of a model training task, an image processing task, a video processing task, a voice recognition task, and a natural language processing task, which is not limited in this disclosure.
In one possible implementation, the management process 101 may be configured to: detecting whether an intermediate process is established or not under the condition that a subtask to be executed exists; in the case where the intermediate process has been created, a process creation request for the subtask is sent to the intermediate process.
The subtasks to be executed may refer to subtasks to be executed in the preset task. For example, if the predetermined task is a model training task, the subtasks may at least include a data reading task, a data preprocessing task, an operator calculating task, and the like.
The management process can be understood as the main process for processing the preset task, and can be used to manage the intermediate process and the execution process while the preset task is being processed; the intermediate process can be understood as a sub-process of the management process, and can be used to create the execution processes that execute the subtasks; an execution process can be understood as a sub-process of the intermediate process, and can be used to execute a subtask of the preset task.
The management process may be a multi-threaded main process, and each thread of the management process may play a different role in a preset task, or process a different task. For example, if the preset task is a model training task, the main thread of the management process may be at least used to control the start and termination of a training process in the model training task, data reading and preprocessing, an execution flow of an operator in the deep network model, an execution logic of the operator on different processors, a synchronization logic between the processors during distributed training, and the like, and the sub-thread of the management process may be used to decompose and distribute a computation task in the model training task. The embodiments of the present disclosure are not limited to managing the number of threads of a process and the role of each thread.
It should be understood that, in order to save the processor's computing and storage resources while the application program of the preset task runs, a process and/or thread may be created only when it is needed. Accordingly, when there is a subtask to be executed, it can first be detected whether the intermediate process has been created; if not, the management process can first create the intermediate process; if so, the management process can send a process creation request to the intermediate process to instruct it to create the execution process. Alternatively, after the application program of the preset task is started, i.e., once the management process is created, the management process may directly create the intermediate process, which then waits for the management process to send process creation requests; the embodiments of the present disclosure are not limited in this respect.
Detecting whether the intermediate process has been created may consist in detecting whether the program corresponding to the intermediate process has been started and is running, where that program may be the program used to create execution processes. It should be understood that any process detection method known in the art may be used to detect whether the intermediate process has been created, and the embodiments of the present disclosure are not limited thereto.
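As one conventional possibility (an assumption, not something the disclosure specifies), the management process could record the PID of the intermediate process when it creates it and later probe that PID, for example with kill(pid, 0):

```c
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <sys/types.h>

pid_t g_intermediate_pid = -1;   /* hypothetical global, set when the intermediate process is created */

bool intermediate_process_exists(void) {
    if (g_intermediate_pid <= 0)
        return false;
    /* signal 0 performs error checking only; no signal is actually delivered */
    return kill(g_intermediate_pid, 0) == 0 || errno == EPERM;
}
```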
It should be understood that the management process may send a process creation request to the intermediate process for multiple times to instruct the intermediate process to create multiple execution processes, so that the multiple execution processes may process the sub-tasks in parallel, thereby improving task processing efficiency.
In one possible implementation, the middle process 102 may be configured to: and creating the execution process in response to the process creation request sent by the management process.
The process creation request may include a process creation parameter, where the process creation parameter is used to instruct an intermediate process to create an execution process, and the process creation parameter may include at least: the name of the executing process, the calling object of the executing process (i.e. the task to be executed by the executing process), the location parameter of the calling object, etc.
The intermediate process can create the execution process by calling the fork system function provided by the operating system; of course, any other known process creation method may also be used, for example, the execution process may also be created through a spawn-family function provided by the operating system, and the embodiments of the present disclosure are not limited thereto.
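As an illustration only, the following C sketch shows how a single-threaded intermediate process might wait for process creation requests and fork one execution process per request. The request structure, the pipe-based request channel, and the worker entry point are assumptions introduced for the example and are not prescribed by the disclosure.

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

struct create_request {                 /* hypothetical process creation parameters */
    int  worker_id;
    char task_name[64];                 /* e.g. "data_read" or "data_preprocess"    */
};

static void run_worker(const struct create_request *req) {
    /* placeholder subtask body: a real worker would read and preprocess data */
    printf("Worker %d running task %s\n", req->worker_id, req->task_name);
}

void intermediate_loop(int request_fd) {
    struct create_request req;
    /* single-threaded loop: block on the request pipe from the management process */
    while (read(request_fd, &req, sizeof req) == (ssize_t)sizeof req) {
        pid_t pid = fork();             /* safe here: this process has one thread */
        if (pid == 0) {
            run_worker(&req);           /* execution process executes the subtask */
            _exit(0);
        }
        /* parent (intermediate process) simply waits for the next request */
    }
}
```

Because the intermediate process owns no additional threads and no locks, the fork inside this loop cannot reproduce the deadlock described in the background section.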
The intermediate process may be a single-threaded process. In this way, when the single-threaded intermediate process creates an execution process, the process safety of the intermediate process can be effectively ensured and deadlock avoided.
It should be appreciated that the management process may send multiple process creation requests to the intermediate process, which may in turn create multiple execution processes based on those requests. For example, given the large amount of data required during model training, the management process may instruct the intermediate process to create multiple execution processes so that the subtasks are processed in parallel, i.e., the execution processes read and/or preprocess data in parallel.
In one possible implementation, the executing process 103 may be configured to: executing the subtasks to obtain the execution result of the subtasks; and sending the execution result of the subtask to the management process.
As described above, the subtasks may include at least a data reading task and/or a data preprocessing task. The executing the subtask to obtain an execution result of the subtask may include: executing a data reading task to obtain read data; and/or executing a data preprocessing task on the read data to obtain preprocessed data. It should be understood that the results of the execution may include read data and/or pre-processed data.
The sending of the execution result of the subtask to the management process may be understood as sending the read data and/or the pre-processing data to the management process, so that the management process processes the pre-processing task based on the execution result.
As described above, there may be multiple execution processes; it should be understood that the subtasks performed by the individual execution processes may be the same or different. The execution processes can process their subtasks in parallel and send the processing results back to the management process.
In the embodiment of the disclosure, during the preset task processing period of the task processing system, a three-level process structure creation process of a management process, an intermediate process and an execution process can be realized, and an execution process for executing a sub-task is created, wherein the execution process is created through the intermediate process, which is beneficial to ensuring the thread safety of the management process, that is, the deadlock phenomenon caused by disappearance of a running thread in the management process when the execution process is created can be effectively avoided.
As mentioned above, when there are subtasks to be executed, there may also be instances where the intermediate process is not created, and in one possible implementation, the management process may be further configured to:
under the condition that the intermediate process is not created, stopping the sub-thread of the management process and cleaning the resources corresponding to the sub-thread; in the case of a stopped child thread and a cleared resource, an intermediate process is created, wherein the intermediate process is placed in the background after creation and runs continuously.
It should be understood that a multi-threaded main process may include a main thread and at least one sub-thread, where the core functions, or rather the key logic, of a task are typically executed in the main thread of the main process. For example, in the model training example above, the main thread controls the starting and terminating of the training procedure, data reading and preprocessing, the execution flow of the operators in the deep network model, the execution logic of the operators on different processors, the synchronization logic between processors during distributed training, and so on. Therefore, to ensure that the preset task runs normally, the intermediate process can be created by the main thread of the main process.
In order to avoid the deadlock phenomenon when the management process creates the intermediate process, the sub-thread of the management process can be stopped first, and the resources corresponding to the sub-thread are cleaned. By the method, the created middle process can be ensured to be safe, and the key resources and the process state of the main process can be protected.
The sub-thread of the management process may be stopped using any thread stopping method known in the art; for example, the management process may stop its sub-thread by calling the pthread_cancel() function, which is not limited in this disclosure.
The cleaned resources may, for example, include at least: releasing lock resources, releasing storage resources on the video memory, releasing computing resources and memory resources allocated by the operating system, and so on. The resources corresponding to the sub-thread may be cleaned using any resource cleanup method known in the art; for example, they may be cleaned by calling thread cleanup handlers (such as pthread_cleanup_push() and pthread_cleanup_pop()), a destructor, or a resource cleanup function developed by the practitioner, which is not limited in this disclosure.
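For illustration, the following C sketch shows how such a cancellation-plus-cleanup sequence might look; the Engine-style worker thread and the specific cleanup handler are assumptions used only to make the example concrete.

```c
#include <pthread.h>

static pthread_mutex_t engine_lock = PTHREAD_MUTEX_INITIALIZER;

/* Cleanup handler: releases the lock if the thread is cancelled while holding it. */
static void release_engine_lock(void *arg) {
    pthread_mutex_unlock((pthread_mutex_t *)arg);
}

/* Hypothetical Engine-style sub-thread of the management process. */
void *engine_thread(void *arg) {
    (void)arg;
    pthread_mutex_lock(&engine_lock);
    pthread_cleanup_push(release_engine_lock, &engine_lock);
    for (;;)
        pthread_testcancel();          /* cancellation point inside the work loop */
    pthread_cleanup_pop(1);            /* balances the push; runs the handler if reached */
    return NULL;
}

/* Called by the management process just before it creates the intermediate process,
 * so that no sub-thread is holding a lock at fork() time. */
void stop_sub_thread(pthread_t tid) {
    pthread_cancel(tid);               /* request the sub-thread to stop           */
    pthread_join(tid, NULL);           /* wait until its cleanup handlers have run */
}
```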
The management process can create the intermediate process by calling the fork system function provided by the operating system; of course, any other known process creation method may be used, for example, the intermediate process may also be created through a spawn-family function provided by the operating system, and the embodiments of the present disclosure are not limited thereto.
In consideration of the fact that a plurality of execution processes can be created, in order to improve the creation efficiency of the execution processes and protect the normal operation of the main process, the intermediate process can be placed in the background to continuously operate after being created so as to wait for a process creation request sent by the management process at any time and create the execution processes. In this case, running continuously in the background is understood to mean that the intermediate process does not exit or terminate during the processing of the predetermined task.
In the embodiment of the disclosure, the intermediate process can be effectively and safely created, which is beneficial to avoiding the occurrence of deadlock phenomenon.
As described above, the process creation request includes a process creation parameter, where the process creation parameter is used to instruct an intermediate process to create an execution process, and in one possible implementation, the creating, by the intermediate process, an execution process in response to the process creation request sent by the management process includes:
and creating an execution process according to the process creation parameters, and creating a communication channel between the management process and the execution process, wherein the communication channel is used for realizing communication between the management process and the execution process.
The intermediate process may create the execution process according to the process creation parameter, and reference may be made to the manner of creating the execution process in the embodiment of the present disclosure, which is not described herein again.
It is known that after a parent process creates a child process, a communication channel (e.g., an ordinary anonymous pipe) can be set up between them to enable parent-child communication. However, in the embodiments of the present disclosure the management process is not the parent process that directly creates the execution process; therefore, after the execution process is created, the intermediate process may create a communication channel between the management process and the execution process to facilitate communication between them, for example so that the execution process can send the execution result to the management process through the communication channel.
Inter-process communication (IPC) techniques can be used to create the communication channel between the management process and the execution process. IPC techniques include pipes (such as named pipes, i.e., FIFOs), message queues, shared memory, and sockets; that is, the communication channel between the management process and the execution process may be any one of a pipe, a message queue, shared memory, or a socket, and the embodiments of the present disclosure are not limited thereto.
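As a sketch of one of the options listed above, the intermediate process could create a per-worker named pipe (FIFO) that the execution process opens for writing and the management process opens for reading; the path naming scheme below is an assumption made for the example.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Intermediate process: create the channel right after forking worker `worker_id`. */
int create_result_channel(int worker_id, char *path, size_t path_len) {
    snprintf(path, path_len, "/tmp/task_result_%d.fifo", worker_id);  /* assumed path scheme */
    return mkfifo(path, 0600);                 /* 0 on success, -1 on error */
}

/* Execution process side: open the FIFO for writing and send execution results. */
int open_channel_as_worker(const char *path)  { return open(path, O_WRONLY); }

/* Management process side: open the FIFO for reading and receive execution results. */
int open_channel_as_manager(const char *path) { return open(path, O_RDONLY); }
```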
It is to be understood that the communication channel is used for enabling communication between the management process and the executing process, which is to be understood as a communication channel for enabling data transmission between the management process and the executing process.
In the embodiment of the disclosure, the execution process and the communication channel can be effectively created, and efficient execution of the subtasks is facilitated.
As mentioned above, the subtasks include a data reading task and/or a data preprocessing task, and in one possible implementation, the management process may be further configured to:
and under the condition that the execution process is created, sending a data index of a preset task to the execution process so as to enable the execution process to execute a data reading task and/or a data preprocessing task, wherein the data index is used for indicating a reading batch and a reading address of data to be processed, and the data to be processed comprises at least one of images, videos, texts and voices. By the method, the data reading task and/or the data preprocessing task can be executed effectively according to the indication of the data index.
As described above, communication between the management process and the execution process may be realized through the created communication channel between the management process and the execution process, and thus, the management process may transmit the data index to the execution process through the communication channel.
It should be understood that data to be processed with a large data volume can be read and preprocessed in batches to improve data processing efficiency. The read address of the data to be processed can be understood as its storage address, so that the data can be read from its storage space (such as a database or a readable storage medium) according to that address.
The data reading task is used to read data, the data preprocessing task is used to perform data preprocessing on the read data, and the data preprocessing may at least include: normalization, regularization, data enhancement, etc., without limitation to the disclosed embodiments.
It should be understood that the subtasks may include only the data reading task or only the data preprocessing task, or may include both the data reading task and the data preprocessing task. The content of the preprocessing task can be set according to actual requirements, and the embodiments of the disclosure are not limited thereto.
In view of the above, during processing of the preset task, and in order to ensure that a batch of data is already prepared by the execution process (that is, already read and/or preprocessed) by the time the management process needs it, the management process may send the data index to the execution process ahead of time according to a preset pre-reading policy, so that the execution process reads and preprocesses the corresponding batch in advance.
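A minimal sketch of such a pre-reading policy is given below, assuming a fixed prefetch depth and a simple pipe-based index/result exchange; the message layouts are illustrative only, and error handling and partial reads are ignored for brevity.

```c
#include <unistd.h>

struct data_index { long batch; long offset; };   /* hypothetical index layout */

void prefetch_loop(int index_fd, int result_fd, long num_batches) {
    enum { PREFETCH_DEPTH = 2 };                  /* assumed pre-read depth        */
    long sent = 0, done = 0;
    char result[4096];                            /* assumed result message size   */

    while (done < num_batches) {
        /* keep PREFETCH_DEPTH indices in flight ahead of the consumed batches */
        while (sent < num_batches && sent - done < PREFETCH_DEPTH) {
            struct data_index idx = { sent, sent * 4096L };
            (void)write(index_fd, &idx, sizeof idx);   /* send index ahead of time   */
            ++sent;
        }
        (void)read(result_fd, result, sizeof result);  /* consume one prepared batch */
        ++done;
    }
}
```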
As described above, there may be a communication channel between the parent process and the child process, the management process is a parent process of the middle process, and the middle process is a parent process of the execution process, in a possible implementation manner, the management process may further send the data index to the middle process, and then send the data index to the execution process through the middle process. It should be understood that, what way to send the data index to the executing process may be set according to actual requirements, and the embodiment of the present disclosure is not limited thereto.
As described above, the subtasks include a data reading task and/or a data preprocessing task, and the management process may send a data index to the execution process so that the execution process executes the data reading task and/or the data preprocessing task. In a possible implementation manner, the executing process executes the subtask to obtain an execution result of the subtask, including:
under the condition of receiving the data index, executing a data reading task according to the reading batch and the reading address indicated by the data index to obtain read data; and/or, executing a data preprocessing task according to the data read by the execution process to obtain preprocessed data; wherein the execution result comprises the read data and/or the pre-processing data. By the method, the subtasks can be effectively executed, so that the execution result of the subtasks is conveniently sent back to the management process.
As described above, the data reading task is used for reading data, and the data preprocessing task is used for performing data preprocessing on the read data, and the data preprocessing may include at least: normalization, regularization, data enhancement, etc., without limitation to the disclosed embodiments.
Executing the data reading task according to the read batch and the read address indicated by the data index to obtain the read data may include: executing the program code of the data reading task according to the read address corresponding to each read batch, reading the data from the storage space in which it is stored, and thereby obtaining the data read in each batch.
Executing the data preprocessing task according to the data read by the execution process to obtain the preprocessed data may include: executing the program code of the data preprocessing task to preprocess the data read in each batch, thereby obtaining the preprocessed data.
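The following C sketch illustrates, under an assumed file layout, batch size, and preprocessing step (byte-to-float normalization), what a data reading task and a data preprocessing task of this kind might look like.

```c
#include <stdio.h>

#define BATCH_BYTES 4096                      /* assumed batch size */

/* Data reading task: read one batch from `path` at byte offset `offset`
 * (the "read address" indicated by the data index). */
size_t read_batch(const char *path, long offset, unsigned char *buf) {
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    fseek(f, offset, SEEK_SET);
    size_t n = fread(buf, 1, BATCH_BYTES, f);
    fclose(f);
    return n;
}

/* Data preprocessing task (toy example): scale raw bytes into [0, 1] floats. */
void preprocess_batch(const unsigned char *raw, size_t n, float *out) {
    for (size_t i = 0; i < n; ++i)
        out[i] = raw[i] / 255.0f;
}
```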
It should be understood that the results of the execution may include read data and/or pre-processed data. That is, the execution result may include only the read data or the preprocessed data, so that the read data or the preprocessed data may be transmitted to the management process; read data and pre-processed data may also be included so that the read data and pre-processed data may be sent to a management process, without limitation to embodiments of the present disclosure.
As described above, a communication channel is established between the management process and the execution process, and the communication channel may be used to implement data transmission between the management process and the execution process, and in one possible implementation, sending the execution result of the sub-task to the management process may include: and the execution process sends the execution result of the subtask to the management process through the communication channel. By the method, the data transmission efficiency between the management process and the execution process can be improved.
As described above, there may be communication channels between the management process and the intermediate process, and between the intermediate process and the execution process. In a possible implementation manner, the sending, by the execution process, the execution result of the subtask to the management process may further include: and the execution process sends the execution result to the intermediate process, and the intermediate process sends the execution result to the management process. In this way, a communication channel between the management process and the intermediate process can be created without additional.
It should be understood that an execution process may be created when there is a subtask to be executed; to save the computing and storage resources of the processor, the execution process may be terminated once its subtask is complete. In one possible implementation, the management process is further configured to: send an execution process termination request to the intermediate process when the execution process has finished executing the subtask and the management process has received the execution result;
the intermediary process is further configured to: and in the case of receiving the execution process termination request, sending an execution process termination indication to the execution process to terminate the execution process.
The execution process finishes executing the subtasks and the management process receives the execution result, which can be understood as that the execution process finishes reading the data and/or preprocesses the read data according to the indication of the data index, and the management process receives the data sent by all the execution processes. In this case, the executing process may be considered to have completed its subtasks to be executed, and the managing process may send an executing process termination request to the intermediate process to instruct the intermediate process to end the execution of the executing process.
The intermediate process sends an execution process termination instruction to the execution process, where the instruction directs the execution process to terminate itself. That is, upon receiving the termination instruction, the execution process may terminate itself by calling the exit() function provided by the operating system, i.e., request the operating system to delete the execution process. It should be understood that once the operating system has deleted the execution process, the execution process has terminated.
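For illustration, a worker loop that honours such a termination instruction received over its control channel could look like the following sketch; the message encoding is an assumption.

```c
#include <stdlib.h>
#include <unistd.h>

enum msg_type { MSG_DATA_INDEX = 1, MSG_TERMINATE = 2 };   /* assumed encoding */

void worker_loop(int ctrl_fd) {
    int msg;
    while (read(ctrl_fd, &msg, sizeof msg) == (ssize_t)sizeof msg) {
        if (msg == MSG_TERMINATE)
            exit(EXIT_SUCCESS);   /* execution process asks the OS to delete itself */
        /* otherwise treat the message as a data index and execute the subtask */
    }
}
```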
As mentioned above, a communication channel may be established between the management process and the execution process, and in one possible implementation, the management process may be further configured to: and under the condition that the execution process finishes executing the subtasks and the management process receives the execution result, directly sending an execution process termination instruction to the execution process so as to terminate the execution process. It should be understood that, as to the manner in which the execution process is terminated, the design may be based on practical requirements, and the embodiment of the present disclosure is not limited thereto.
Considering that the execution process is created by the intermediate process, compared to the above-mentioned manner of directly ending the execution process by the management process, ending the execution process by the intermediate process, that is, instructing the execution process to end itself by the intermediate process, is a normal and safe manner of terminating the execution process.
It should be noted that the above trigger condition for terminating the execution process (namely, that the execution process has finished executing the subtask and the management process has received the execution result) is one implementation provided by the embodiments of the present disclosure. In practice, those skilled in the art may set the trigger condition according to the type of the preset task, actual requirements, and so on. For example, for a model training task, an execution process termination request may be sent to the intermediate process, or a termination instruction may be sent directly to the execution process, at the end of each round of iterative training; it should be appreciated that the execution processes can be re-created at the start of the next round, while the intermediate process continues to run in the background throughout.
In one possible implementation, the management process is further configured to:
sending an intermediate process termination instruction to the intermediate process to terminate the intermediate process under the condition that the preset task is completed based on the execution result; in the case where the intermediate process has terminated, the management process is terminated. By the method, the intermediate process and the management process can be effectively released under the condition of finishing the preset task.
Completing the preset task based on the execution result may be understood as the entire preset task having been processed, for example the entire model training having been completed. In this case, the management process may first terminate the intermediate process and then terminate itself, thereby releasing the processes created for processing the preset task.
When receiving the intermediate process termination instruction, the intermediate process can terminate itself by calling an exit () function provided by the operating system, that is, request the operating system to delete the intermediate process. It should be understood that the operating system has deleted the intermediate process, meaning that the intermediate process has terminated.
The management process may terminate itself by calling an exit () function provided by the operating system, that is, requesting the operating system to delete the management process. It should be understood that the management process has been deleted by the operating system, meaning that the management process has terminated.
It should be noted that terminating the execution process, the intermediate process, and the management process through the exit() function is one implementation provided by the embodiments of the present disclosure; in practice, those skilled in the art may terminate each process using the different process termination functions provided by different operating systems (such as Windows or Linux) or any process termination method known in the art, and the embodiments of the present disclosure are not limited thereto.
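On a POSIX system, the shutdown order described above might be sketched as follows; sending SIGTERM as the "intermediate process termination instruction" and reaping the intermediate process with waitpid() are assumptions chosen for the example rather than requirements of the disclosure.

```c
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Called by the management process once the preset task is complete. */
void shutdown_after_task(pid_t intermediate_pid) {
    kill(intermediate_pid, SIGTERM);      /* assumed form of the termination instruction        */
    waitpid(intermediate_pid, NULL, 0);   /* wait until the intermediate process has terminated */
    exit(EXIT_SUCCESS);                   /* then the management process terminates itself      */
}
```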
Fig. 4 is a schematic diagram illustrating a processing method of a model training task, which is applicable to the task processing system according to an embodiment of the present disclosure, and as shown in fig. 4, the processing method includes:
in step S11, during the processing of the model training task, when performing one round (epoch) of model training, the management process (which may be referred to as a training main process) prepares to create an execution process (which may be referred to as a work (Worker) sub-process) for reading data and preprocessing data; according to a user set value, a plurality of Worker subprocesses can be created and run in parallel, so that the data reading performance is improved.
In step S12, when preparing to create the first Worker sub-process, if it is detected that the middle process is not created, the training main process will first stop the sub-thread (which may be called an Engine thread) of the training main process, clean up the key resources, and then the training main process creates the middle process using the fork system function; the middle process does not exit during the processing period of the whole model training task, and waits for a process creation request or instruction for creating a Worker subprocess, which is sent by a training main process, in a background running mode.
In step S13, the training main process sends a process creation request and a process creation parameter for creating a Worker subprocess to the intermediate process, and after receiving the process creation request, the intermediate process calls a fork system function to create the Worker subprocess by using the process creation parameter, and creates a pipeline for realizing communication between the training main process and the Worker subprocess.
In step S14, after all requested Worker sub-processes have been created, the training main process sends the data indices of the data to be read to the corresponding Worker sub-processes. A pre-reading policy is used during model training: the training main process sends the data indices to the Worker sub-processes in advance, so that the Worker sub-processes read and preprocess the corresponding batches ahead of time, and a batch is, as far as possible, already prepared by the Worker sub-processes by the time the training main process needs to compute on it.
In step S15, the Worker sub-process starts executing data reading and data preprocessing tasks after receiving the data index, and sends the data back to the training main process after the tasks are completed.
In step S16, when each round (epoch) of model training of the model training task is finished, the Worker subprocess is terminated, that is, the Worker subprocess is exited, and after the next round of model training is started, the Worker subprocess is re-created according to the above steps S11 to S14, but the intermediate process will always run in the background after being created.
In step S17, when the overall model training of the model training task is finished, i.e., when the whole model training task is complete, the training main process sends a termination request to the intermediate process to make the intermediate process exit, i.e., to terminate the intermediate process, and then terminates itself.
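Purely as an overview, the per-epoch flow of steps S11 to S17 can be condensed into the following sketch; the helper functions are placeholders standing in for the mechanisms sketched earlier in this description, and the worker and epoch counts are arbitrary.

```c
#include <stdio.h>
#include <sys/types.h>

/* Placeholder helpers: real versions would fork, pipe and signal as sketched above. */
static pid_t ensure_intermediate_process(void) { return 1000; }                                      /* S12 */
static void  request_worker(pid_t mid, int id) { (void)mid; printf("create Worker%d\n", id); }       /* S13 */
static void  send_data_indices(int workers)    { printf("send indices to %d workers\n", workers); }  /* S14 */
static void  collect_results_and_train(void)   { printf("train on returned batches\n"); }            /* S15 */
static void  terminate_workers(pid_t mid)      { (void)mid; printf("terminate workers\n"); }         /* S16 */
static void  terminate_intermediate(pid_t mid) { (void)mid; printf("terminate intermediate\n"); }    /* S17 */

static void train(int num_epochs, int num_workers) {
    pid_t mid = ensure_intermediate_process();   /* created once, kept running in the background */
    for (int epoch = 0; epoch < num_epochs; ++epoch) {
        for (int w = 0; w < num_workers; ++w)
            request_worker(mid, w);              /* Worker sub-processes are created per epoch   */
        send_data_indices(num_workers);
        collect_results_and_train();
        terminate_workers(mid);                  /* Workers exit at the end of the epoch         */
    }
    terminate_intermediate(mid);                 /* intermediate exits with the whole task       */
}

int main(void) { train(2, 3); return 0; }
```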
FIG. 5 shows a schematic diagram of a process creation procedure according to an embodiment of the present disclosure. As shown in Fig. 5, the process creation procedure includes: the training main process creates the intermediate process by calling the fork system function; the training main process sends, multiple times, a Worker sub-process creation request to the intermediate process; and the intermediate process, in response to each Worker sub-process creation request, calls the fork system function to create the Worker sub-processes Worker0, Worker1, and Worker2.
In the embodiments of the present disclosure, while the task processing system processes the model training task, a three-level process creation structure of training main process, intermediate process, and Worker sub-process can be realized, in which the Worker sub-processes that execute the data reading and data preprocessing tasks are created by the intermediate process. This helps ensure the thread safety of the training main process and effectively avoids the deadlock that would occur if a task-running thread of the training main process (such as an Engine thread) disappeared when a Worker sub-process is created.
It should be understood that, during processing of the model training task, the state of the management process (e.g., the training main process) may change (for example, when a child process is created), and these changes may affect the execution processes (e.g., the Worker sub-processes) created from it. In the related art, where the management process creates the execution processes directly, such changes can lead to deadlock after the child process is created. According to the embodiments of the present disclosure, once the intermediate process is started its process state is no longer affected by the management process, so the execution processes created by the intermediate process are all safe, and the management process is safe as well.
Whereas the related art uses a two-level process creation structure in which the management process directly creates the execution processes, the embodiments of the present disclosure provide a three-level process creation structure of management process, intermediate process, and execution process, in which the management process creates the intermediate process and the intermediate process in turn creates the execution processes.
According to the embodiment of the present disclosure, the three-level process creation structure of management process, intermediate process and execution process effectively ensures the safety of each process's state, that is, it effectively avoids deadlock. By stopping the sub-threads of the management process and cleaning up their resources in time before the intermediate process is created, the intermediate process is guaranteed to be created without deadlock; at the same time, data reading performance is not affected, and the amount of memory resources occupied is reduced.
According to the embodiment of the present disclosure, a process state and data protection mechanism is provided: when the intermediate process is created, the key resources of the main thread in the management process are properly protected and the sub-threads stop working, which ensures that the created intermediate process is safe, while the state information of the management process, for example the data cached by the management process, is retained.
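The protection mechanism can be illustrated with the hedged sketch below: before creating the intermediate process, the management process stops its helper sub-thread and releases the thread's resources, so no sub-thread can be left holding a lock at creation time, while the data it has cached is retained and handed to the new process. The class, event flag and cache names are assumptions for illustration, not the patented mechanism.

    import multiprocessing as mp
    import threading
    import time

    class Management:
        # Stand-in for the management (training main) process side of the mechanism.
        def __init__(self):
            self.cache = {"batches_seen": 0}     # state information to be retained
            self._stop = threading.Event()
            self._helper = threading.Thread(target=self._background_work)
            self._helper.start()

        def _background_work(self):
            # Stand-in for an Engine-style sub-thread of the management process.
            while not self._stop.is_set():
                self.cache["batches_seen"] += 1
                time.sleep(0.01)

        def quiesce(self):
            # Stop the sub-thread and clean up before any process is created, so that
            # no thread of the management process is left mid-operation at fork time.
            self._stop.set()
            self._helper.join()

    def intermediate_main(snapshot):
        print("intermediate process sees retained state:", snapshot, flush=True)

    if __name__ == "__main__":
        mgmt = Management()
        time.sleep(0.05)                         # let the sub-thread do some work
        mgmt.quiesce()                           # first: stop sub-threads, clean resources
        helper = mp.Process(target=intermediate_main, args=(dict(mgmt.cache),))
        helper.start()                           # only now create the intermediate process
        helper.join()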
It is understood that the above-mentioned embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the underlying principles; for reasons of space, the details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the system of the specific embodiments described above, the specific order in which the steps are executed should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any task processing system provided by the present disclosure; the corresponding technical solutions and descriptions refer to the corresponding descriptions of the system and are not repeated here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described system. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to execute the above-described system.
The disclosed embodiments also provide a computer program product, including computer-readable code or a non-transitory computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above system.
The electronic device may be provided as a terminal device, a server, or a device of another form.
Fig. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another similar terminal device.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a create button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system of Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions so as to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK) or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A task processing system comprising at least one processor, said processor being configured to run a management process, an intermediate process, and an execution process, said management process being configured to manage said intermediate process and said execution process during processing of a predetermined task, said intermediate process being configured to create said execution process, said execution process being configured to execute a sub-task of said predetermined task,
the management process is configured to: detecting whether the intermediate process is established or not under the condition that the subtask to be executed exists; sending a process creation request for the subtask to the intermediate process if the intermediate process has been created;
the intermediary process is configured to: responding to a process creation request sent by the management process, and creating an execution process;
the execution process is configured to: executing the subtask to obtain an execution result of the subtask; and sending the execution result of the subtask to the management process.
2. The system of claim 1, wherein the management process is further configured to:
under the condition that the intermediate process is not created, stopping the sub-thread of the management process and cleaning resources corresponding to the sub-thread;
and creating the intermediate process under the condition that the child thread is stopped and the resource is cleaned, wherein the intermediate process is placed in the background to continuously run after being created.
3. The system according to claim 1 or 2, wherein the process creation request includes a process creation parameter for instructing the intermediate process to create an execution process,
wherein, the intermediate process responds to the process creation request sent by the management process to create an execution process, and comprises:
and creating the execution process according to the process creation parameters, and creating a communication channel between the management process and the execution process, wherein the communication channel is used for realizing communication between the management process and the execution process.
4. A system according to any of claims 1-3, wherein the subtasks include a data reading task and/or a data pre-processing task, the management process being further configured to:
and under the condition that the execution process is established, sending a data index of the preset task to the execution process so as to enable the execution process to execute the data reading task and/or the data preprocessing task, wherein the data index is used for indicating a reading batch and a reading address of data to be processed, and the data to be processed comprises at least one of images, videos, texts and voices.
5. The system according to claim 4, wherein the executing process executes the subtask to obtain an execution result of the subtask, and includes:
under the condition of receiving the data index, executing a data reading task according to the reading batch and the reading address indicated by the data index to obtain read data; and/or,
executing a data preprocessing task according to the data read by the execution process to obtain preprocessed data;
wherein the execution result comprises the read data and/or the pre-processing data.
6. The system of any of claims 1-5, wherein the management process is further configured to: sending an execution process termination request to the intermediate process when the execution process completes the execution of the subtask and the management process has received the execution result;
the intermediary process is further configured to: and sending an execution process termination instruction to the execution process to terminate the execution process under the condition of receiving the execution process termination request.
7. The system of any of claims 1-6, wherein the management process is further configured to:
sending an intermediate process termination instruction to the intermediate process to terminate the intermediate process under the condition that the preset task is completed based on the execution result;
in the event that the intermediate process has terminated, terminating the management process.
8. The system according to any of claims 1-7, wherein the intermediate process is a single threaded process; the intermediate process creates the execution process through a fork system function; the preset task comprises at least one of a model training task, an image processing task, a video processing task, a voice recognition task and a natural language processing task.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to execute the system of any one of claims 1 to 8.
10. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the system of any one of claims 1 to 8.
CN202110718641.0A 2021-06-28 2021-06-28 Task processing system, electronic device, and storage medium Pending CN113377543A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110718641.0A CN113377543A (en) 2021-06-28 2021-06-28 Task processing system, electronic device, and storage medium
KR1020227020149A KR20230005106A (en) 2021-06-28 2021-10-20 Job processing systems, electronic devices and storage media
PCT/CN2021/125004 WO2023273025A1 (en) 2021-06-28 2021-10-20 Task processing system, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110718641.0A CN113377543A (en) 2021-06-28 2021-06-28 Task processing system, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN113377543A (en) 2021-09-10

Family

ID=77579447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110718641.0A Pending CN113377543A (en) 2021-06-28 2021-06-28 Task processing system, electronic device, and storage medium

Country Status (3)

Country Link
KR (1) KR20230005106A (en)
CN (1) CN113377543A (en)
WO (1) WO2023273025A1 (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102388370A (en) * 2009-06-19 2012-03-21 核心科技有限公司 Computer process management
CN104331327A (en) * 2014-12-02 2015-02-04 山东乾云启创信息科技有限公司 Optimization method and optimization system for task scheduling in large-scale virtualization environment
US20150150142A1 (en) * 2013-10-23 2015-05-28 Avecto Limited Computer device and method for isolating untrusted content
CN105335171A (en) * 2014-06-24 2016-02-17 北京奇虎科技有限公司 Method and device for long residence of application program in background of operating system
CN107133086A (en) * 2016-02-29 2017-09-05 阿里巴巴集团控股有限公司 Task processing method, device and system based on distributed system
CN108121594A (en) * 2016-11-29 2018-06-05 阿里巴巴集团控股有限公司 A kind of process management method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019297B2 (en) * 2013-04-03 2018-07-10 Salesforce.Com, Inc. Systems and methods for implementing bulk handling in asynchronous processing
JP6336090B2 (en) * 2014-01-02 2018-06-06 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Method and apparatus for maintaining data for online analytical processing in a database system
US10289446B1 (en) * 2015-09-15 2019-05-14 Amazon Technologies, Inc. Preserving web browser child processes by substituting a parent process with a stub process
CN112527403B (en) * 2019-09-19 2022-07-05 荣耀终端有限公司 Application starting method and electronic equipment
CN111414256B (en) * 2020-03-27 2022-10-04 中国人民解放军国防科技大学 Application program process derivation method, system and medium based on kylin mobile operating system
CN113377543A (en) * 2021-06-28 2021-09-10 上海商汤科技开发有限公司 Task processing system, electronic device, and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023273025A1 (en) * 2021-06-28 2023-01-05 上海商汤科技开发有限公司 Task processing system, electronic device, and storage medium
CN114194205A (en) * 2021-12-03 2022-03-18 广州小鹏汽车科技有限公司 Vehicle control method based on Bluetooth process, vehicle and storage medium
CN114194205B (en) * 2021-12-03 2024-01-09 广州小鹏汽车科技有限公司 Vehicle control method based on Bluetooth process, vehicle and storage medium

Also Published As

Publication number Publication date
KR20230005106A (en) 2023-01-09
WO2023273025A1 (en) 2023-01-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40051279; Country of ref document: HK)