CN110888722A - Task processing method and device, electronic equipment and computer readable storage medium - Google Patents

Task processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN110888722A
CN110888722A
Authority
CN
China
Prior art keywords
container
subtask
cluster
processed
image file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911118848.3A
Other languages
Chinese (zh)
Other versions
CN110888722B (en)
Inventor
陆瀛海
董峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201911118848.3A priority Critical patent/CN110888722B/en
Publication of CN110888722A publication Critical patent/CN110888722A/en
Application granted granted Critical
Publication of CN110888722B publication Critical patent/CN110888722B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The embodiment of the invention provides a task processing method, a task processing device, an electronic device and a computer readable storage medium. The method includes: if a target scheduling command sent by a second server is received, acquiring a file to be processed and a subtask cluster, where the subtask cluster includes at least two related subtasks of the to-be-processed task for the to-be-processed file, and the target scheduling command instructs the first server to create a container combination for the subtask cluster; and creating the container combination for the subtask cluster, where the container combination includes a container corresponding to each subtask in the subtask cluster, and each container holds an image file of an inference model that implements the corresponding subtask. If the image file of each container in the container combination is run, task processing of the file to be processed is achieved. When processing a task formed by combining a plurality of subtasks, the embodiment of the invention saves resources and improves task processing efficiency.

Description

Task processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of cloud computing technologies, and in particular, to a task processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Currently, most Artificial Intelligence (AI) inference models are created for a single task. For example, an AI inference model for text detection can only process text detection tasks, and an AI inference model for image recognition can only process image recognition tasks. To process a task composed of multiple tasks, for example a detection task for a multimedia file containing both text and images, a new AI inference model has to be created, for example by training on the content of the multimedia file from scratch to obtain an end-to-end AI inference model, which is not only inefficient but also wastes resources.
In view of the above drawbacks, the AI inference models may instead be deployed separately, for example with different AI inference models deployed in different web services; if the detection task of the multimedia file includes multiple tasks, the multiple web services are invoked. However, each invoked web service then needs to download the same multimedia file again; moreover, much repetitive work, such as data preparation and vectorization, is performed by each service.
Therefore, processing a task formed by combining a plurality of tasks by deploying different AI inference models in different web services wastes resources and yields low task processing efficiency in the prior art.
Disclosure of Invention
Embodiments of the present invention provide a task processing method, a task processing device, an electronic device, and a computer-readable storage medium, so as to achieve the purposes of saving resources and improving task processing efficiency when processing a task formed by combining multiple sub-tasks. The specific technical scheme is as follows:
in a first aspect of the present invention, there is provided a task processing method applied to a first server, where the method includes:
if a target scheduling command sent by a second server is received, acquiring a file to be processed and a subtask cluster; the subtask cluster includes at least two related subtasks of the to-be-processed task for the to-be-processed file, and the target scheduling command is used for instructing the first server to create a container combination for the subtask cluster;
creating a container combination for the subtask cluster; the container combination includes a container corresponding to each subtask in the subtask cluster, and each container holds an image file of an inference model that implements the corresponding subtask; if the image file of each container in the container combination is run, task processing of the file to be processed is achieved.
In a second aspect of the present invention, there is also provided a task processing method applied to a second server, where the method includes:
acquiring a subtask cluster of a to-be-processed task for a to-be-processed file, wherein the subtask cluster comprises at least two related subtasks of the to-be-processed task;
generating a target scheduling command based on the subtask cluster; the target scheduling command is used for instructing a first server to create a container combination for the subtask cluster, the container combination includes a container corresponding to each subtask in the subtask cluster, each container holds an image file of an inference model that implements the corresponding subtask, and if the image file of each container in the container combination is run, task processing can be performed on the file to be processed;
and sending the target scheduling command to the first server.
In a third aspect of the present invention, there is further provided a task processing method applied to a first server, where the method includes:
acquiring a container combination corresponding to a subtask cluster of a file to be processed; the subtask cluster includes at least two related subtasks of the to-be-processed task for the to-be-processed file, the container combination includes a container corresponding to each subtask in the subtask cluster, and each container holds an image file of an inference model that implements the corresponding subtask;
determining the processing sequence of the image file of each container in the container combination according to the execution sequence of each subtask in the subtask cluster;
and based on the processing sequence, sequentially utilizing the mirror image file of each container in the container combination to perform task processing on the file to be processed.
In a fourth aspect of the present invention, there is also provided a task processing apparatus applied to a first server, the apparatus including:
the first acquisition module is used for acquiring the file to be processed and the subtask cluster if a target scheduling command sent by the second server is received; the subtask cluster includes at least two related subtasks of the to-be-processed task for the to-be-processed file, and the target scheduling command is used for instructing the first server to create a container combination for the subtask cluster;
the creating module is used for creating a container combination for the subtask cluster; the container combination includes a container corresponding to each subtask in the subtask cluster, and each container holds an image file of an inference model that implements the corresponding subtask; if the image file of each container in the container combination is run, task processing of the file to be processed is achieved.
In a fifth aspect of the present invention, there is also provided a task processing apparatus applied to a second server, the apparatus including:
the second acquisition module is used for acquiring a subtask cluster of a to-be-processed task for the to-be-processed file, wherein the subtask cluster comprises at least two related subtasks of the to-be-processed task;
the generating module is used for generating a target scheduling command based on the subtask cluster; the target scheduling command is used for instructing a first server to create a container combination for the subtask cluster, where the container combination includes a container corresponding to each subtask in the subtask cluster, and each container holds an image file of an inference model that implements the corresponding subtask; if the image file of each container in the container combination is run, task processing of the file to be processed is achieved;
a sending module, configured to send the target scheduling command to the first server.
In a sixth aspect of the present invention, there is also provided a task processing apparatus applied to a first server, the apparatus including:
the third acquisition module is used for acquiring a container combination corresponding to the subtask cluster of the file to be processed; the subtask cluster includes at least two related subtasks of the to-be-processed task for the to-be-processed file, the container combination includes a container corresponding to each subtask in the subtask cluster, and each container holds an image file of an inference model that implements the corresponding subtask;
the determining module is used for determining the processing sequence of the image file of each container in the container combination according to the execution sequence of each subtask in the subtask cluster;
and the task processing module is used for sequentially using the image file of each container in the container combination to perform task processing on the file to be processed, based on the processing sequence.
In another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions which, when executed on a computer, cause the computer to execute any one of the task processing methods on the first server side, or any one of the task processing methods on the second server side.
In another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to execute any one of the task processing methods on the first server side.
In another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to execute any one of the task processing methods on the second server side.
According to the task processing method, the task processing device, the electronic device, and the computer readable storage medium provided by the embodiments of the invention, the target scheduling command generated by the second server based on the subtask cluster of the to-be-processed task schedules the first server to create a container combination for the subtask cluster; if the first server runs the image file of each container in the container combination, task processing of the to-be-processed file is achieved.
In the embodiments of the invention, the image files of the inference models that implement the subtasks in the subtask cluster are scheduled to run on the same server in the form of a container combination, so that a combined model is constructed and the inference models of the subtasks are used together. Because the file to be processed only needs to be downloaded once for the combined model, repeated downloads for each inference model are avoided, and repeated preprocessing work such as data preparation and vectorization is also avoided: the file is downloaded and preprocessed once and is then used by every inference model, which saves resources and improves task processing efficiency.
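The one-download, one-preprocess behavior of the combined model can be sketched as follows. All function names, model names, and file names here are illustrative assumptions, not taken from the patent.

```python
# Sketch of the combined-model pipeline: the to-be-processed file is
# downloaded once and preprocessed once, then every inference model in
# the container combination consumes the same prepared data in sequence.

def download(file_info):
    # Placeholder for fetching the to-be-processed file (once).
    return f"raw:{file_info['name']}"

def preprocess(raw):
    # Placeholder for the shared data-preparation / vectorization step.
    return f"vec({raw})"

def run_pipeline(file_info, models):
    data = preprocess(download(file_info))  # one download, one preprocess
    results = {}
    for name, model in models:              # each container's model runs in turn
        results[name] = model(data)
    return results

# Two stand-in "inference models" sharing the preprocessed input.
models = [
    ("text_detection", lambda v: f"text<-{v}"),
    ("image_recognition", lambda v: f"image<-{v}"),
]
out = run_pipeline({"name": "clip.mp4"}, models)
```

Contrast this with the web-service deployment criticized above, where each service would repeat both the download and the preprocessing.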
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic diagram illustrating an interaction flow of a task processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a detailed process of step 102 of the task processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a task processing method according to an embodiment of the present invention;
FIG. 4 is a second flowchart illustrating a task processing method according to an embodiment of the present invention;
FIG. 5 is a third flowchart illustrating a task processing method according to an embodiment of the present invention;
FIG. 6 is a first schematic diagram of a container combination after container arrangement according to an embodiment of the present invention;
FIG. 7 is a second schematic diagram of a container combination after container arrangement according to an embodiment of the present invention;
FIG. 8 is a fourth flowchart illustrating a task processing method according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a detailed process of step 803 of the task processing method according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an exemplary structure of a task processing device according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a detailed structure of a creating module of a task processing device according to an embodiment of the present invention;
FIG. 12 is a second schematic diagram illustrating a task processing device according to an embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating a detailed structure of a generating module of a task processing device according to an embodiment of the present invention;
FIG. 14 is a third exemplary diagram of a task processing device according to an embodiment of the present invention;
FIG. 15 is a schematic diagram illustrating a detailed structure of task processing modules of the task processing device according to an embodiment of the present invention;
FIG. 16 is a diagram illustrating an electronic device according to an embodiment of the invention;
fig. 17 is a second schematic structural diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
First, a task processing method provided in an embodiment of the present invention is described based on a task processing system.
It should be noted that the task processing method provided by the embodiment of the present invention may be applied to a task processing system. The task processing system includes a first server and a second server. The first server may be a processing server in a server cluster, and is used for receiving a target scheduling command sent by the second server and, in response to the target scheduling command, creating a container combination for the subtask cluster of the to-be-processed task of the to-be-processed file; if the first server runs the image file of each container in the container combination, task processing of the to-be-processed file is achieved.
The second server may be a master server in the server cluster, and is configured to send the target scheduling command to the first server, thereby scheduling the first server to create the container combination and perform task processing on the file to be processed.
Referring to fig. 1, an interaction flow diagram of a task processing method in an embodiment of the present invention is shown. As shown in fig. 1, the method may include the steps of:
step 101, the second server obtains a subtask cluster of a to-be-processed task for a to-be-processed file.
Wherein the subtask cluster includes at least two related subtasks of the to-be-processed task.
And 102, the second server generates a target scheduling command based on the subtask cluster.
The target scheduling command is used for instructing the first server to create a container combination for the subtask cluster; the container combination includes a container corresponding to each subtask in the subtask cluster, each container holds an image file of an inference model that implements the corresponding subtask, and if the image file of each container in the container combination is run, task processing can be performed on the file to be processed.
And 103, the second server sends the target scheduling command to the first server.
And step 104, if the first server receives the target scheduling command sent by the second server, acquiring the file to be processed and the subtask cluster.
Step 105, the first server creates a container combination for the subtask cluster; if the image file of each container in the container combination is run, task processing of the file to be processed is achieved.
In step 101, when a terminal triggers execution of a to-be-processed task based on a to-be-processed file, the terminal sends the to-be-processed file and the to-be-processed task for the to-be-processed file to a server cluster, a second server in the server cluster receives the to-be-processed task, and schedules a first server to create a container combination based on the to-be-processed task to perform task processing on the to-be-processed file.
Specifically, when receiving the task to be processed, the second server may query a preset configuration file to obtain the subtask cluster of the task to be processed. More specifically, the subtask cluster of the to-be-processed task may be related to both the type of the to-be-processed file and the attribute of the to-be-processed task: for the same to-be-processed file, tasks with different attributes may have different subtask clusters, and for the same task attribute, files of different types may have different subtask clusters.
For example, for the same file to be processed, a task whose attribute is identifying a specific suspect and a task whose attribute is identifying gory or violent content have completely different subtask clusters. For another example, for the same to-be-processed task, a video file and a text file also have different subtask clusters.
Therefore, when the second server acquires the subtask cluster, the steps may include:
acquiring the type of a file to be processed and a task to be processed aiming at the file to be processed;
and inquiring and acquiring the subtask cluster of the task to be processed from a preset configuration file based on the type of the file to be processed and the attribute of the task to be processed.
The configuration file may be generated by pre-configuring a mapping relationship between the type of the file to be processed plus the attribute of the task to be processed on one side and the subtask cluster on the other, and includes a mapping table keyed on the file type and the task attribute; for example, type A of the file to be processed and attribute B of the task to be processed correspond in the mapping table to subtask cluster C. Thus, based on the type of the file to be processed and the attribute of the task to be processed, the subtask cluster of the task to be processed can be obtained by querying the configuration file.
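A minimal sketch of such a mapping table follows; the concrete file types, task attributes, and subtask names are invented for illustration and are not from the patent.

```python
# Preset configuration: (file type, task attribute) -> subtask cluster.
CONFIG = {
    ("video", "violence_detection"): ["frame_extract", "image_recognition"],
    ("video", "suspect_identification"): ["frame_extract", "face_detection", "face_match"],
    ("text", "violence_detection"): ["text_detection"],
}

def get_subtask_cluster(file_type, task_attribute):
    # Query the preset configuration file for the subtask cluster,
    # keyed on both the file type and the task attribute.
    return CONFIG[(file_type, task_attribute)]

cluster = get_subtask_cluster("video", "suspect_identification")
```

Note that the same file type with a different task attribute, or the same attribute with a different file type, yields a different cluster, matching the behavior described above.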
It should be noted that the relatedness of the subtasks in the subtask cluster means that the subtasks are related in execution order. The execution order may be understood as a time-based order of execution. For example, if the subtask cluster includes subtask A and subtask B, subtask B may depend on data from subtask A during execution, i.e., subtask A is executed first and then subtask B; or, even though subtask A and subtask B do not depend on each other's data, subtask B may only start after the execution of subtask A is completed.
In step 102, the target scheduling command may carry task information, where the task information is used to indicate a task that needs to be obtained by the first server, for example, a subtask cluster of a task to be processed is obtained based on the task information.
The task information may include a service identifier of each subtask in the subtask cluster. In practical applications, the task information may include a plurality of task parameters, where each task parameter may be the service identifier of one subtask; accordingly, the first server may obtain each subtask based on its service identifier, combine the subtasks to obtain the subtask cluster, and obtain, based on the service identifier of each subtask, the image file of the inference model that implements that subtask.
The task information may instead include identification information of the image file of the inference model that implements each subtask in the subtask cluster. In practical applications, the task information may then include a plurality of task parameters, where each task parameter may be the identification information of the image file of the inference model that implements one subtask; accordingly, the first server may directly obtain each such image file based on its identification information.
The target scheduling command may also carry scheduling information for instructing the first server to create a container combination for the subtask cluster. In practical applications, the scheduling information may include a plurality of scheduling parameters, each scheduling parameter may be a container identifier corresponding to a subtask, and correspondingly, the first server may create a container corresponding to the container identifier corresponding to each subtask based on the container identifier corresponding to each subtask, so as to create a container combination.
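The pairing of task parameters (service identifiers) with scheduling parameters (container identifiers) on the first server might look like the following sketch; the registry contents and all identifier names are assumptions for illustration.

```python
# Illustrative lookup from service identifier to image identifier,
# as the first server might hold it.
IMAGE_REGISTRY = {
    "svc_a": "img_a1",
    "svc_b": "img_a2",
}

def resolve_containers(task_params, scheduling_params):
    # Pair each subtask's image (resolved from its service identifier)
    # with the container identifier that will hold that image.
    return [
        {"container_id": cid, "image": IMAGE_REGISTRY[svc]}
        for svc, cid in zip(task_params, scheduling_params)
    ]

combo = resolve_containers(["svc_a", "svc_b"], ["B1", "B2"])
```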
The target scheduling command may carry the task information and scheduling information of the subtask cluster through combined configuration information. Taking task information that includes identification information of image files as an example, refer to fig. 2, which shows a detailed flowchart of step 102 of a task processing method in an embodiment of the present invention. As shown in fig. 2, step 102 specifically includes the following steps:
step 1021, determining identification information of the image file of the inference model for realizing each subtask in the subtask cluster.
And 1022, performing container configuration on the image file for realizing the inference model of each subtask, and obtaining container configuration information corresponding to each subtask.
And the container configuration information corresponding to the subtasks is used for indicating that containers are created for the subtasks.
And 1023, combining the container configuration information corresponding to each subtask to generate a target scheduling command comprising the combined configuration information.
And the combination configuration information is used for indicating that a container combination is created for the container corresponding to each subtask.
In step 1021, the second server may obtain the service identifier of each subtask in the subtask cluster, and determine, based on these service identifiers, the identification information of the image file of the inference model that implements each subtask. Specifically, the second server may store a mapping table from the service identifier of a subtask to the identification information of the image file of the inference model implementing that subtask; by querying this mapping table with the service identifiers of the subtasks in the subtask cluster, the identification information of each image file can be determined.
In step 1022, for each subtask, the container configuration information corresponding to the subtask includes a task parameter and a scheduling parameter of the subtask, where the task parameter includes the identification information of the image file of the inference model that implements the subtask, and the scheduling parameter includes the identifier of the container that holds the image file. For example, the container configuration information corresponding to subtask A includes identifier A1 and identifier B1, where identifier A1 is the task parameter of subtask A, specifically the identifier of the image file of the inference model implementing subtask A, and identifier B1 is the scheduling parameter of subtask A, specifically the identifier of the container holding the image file corresponding to identifier A1.
In step 1023, the combined configuration information includes the container configuration information of each subtask in the subtask cluster. For example, for a subtask cluster including subtask A, subtask B, and subtask C, where the container configuration information corresponding to subtask A includes identifier A1 and identifier B1, that of subtask B includes identifier A2 and identifier B2, and that of subtask C includes identifier A3 and identifier B3, the combined configuration information may be represented as {{identifier A1, identifier B1}; {identifier A2, identifier B2}; {identifier A3, identifier B3}}.
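Using the identifiers from the example above, building the combined configuration information amounts to concatenating the per-container entries in order. The dict layout below is an illustrative assumption, not a format the patent specifies.

```python
def container_config(image_id, container_id):
    # One subtask's container configuration: a task parameter (image
    # identifier) plus a scheduling parameter (container identifier).
    return {"image": image_id, "container": container_id}

def combine(configs):
    # Combined configuration information = ordered list of the
    # per-subtask container configurations.
    return list(configs)

combined = combine([
    container_config("A1", "B1"),   # subtask A
    container_config("A2", "B2"),   # subtask B
    container_config("A3", "B3"),   # subtask C
])
```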
It should be noted that, when the data formats of the inference models corresponding to two adjacent subtasks in the subtask cluster are not completely consistent, the combined configuration information may further include a parameter indicating the creation of a target container, for example a target container whose container identifier is identifier D, where the target container is used for data format conversion. The second server may schedule the target container to be arranged between a first container and a second container in the container combination, where the first container and the second container are the containers of the two adjacent subtasks whose inference model data formats do not match.
For example, if the first container is the container of identifier B1 and the second container is the container of identifier B2, the combined configuration information may be represented as {{identifier A1, identifier B1}; {identifier D}; {identifier A2, identifier B2}; {identifier A3, identifier B3}}.
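Inserting the format-conversion target container between mismatched neighbors, as in the {identifier D} example above, can be sketched like this; the format names (`jpeg`, `tensor_v1`, ...) are invented for illustration.

```python
def insert_adapters(configs, adapter_id="D"):
    # Walk adjacent container pairs; where the upstream model's output
    # format differs from the downstream model's input format, insert
    # the format-conversion target container between them.
    out = []
    for i, cfg in enumerate(configs):
        out.append(cfg)
        if i + 1 < len(configs) and cfg["out_format"] != configs[i + 1]["in_format"]:
            out.append({"container": adapter_id})  # target container
    return out

configs = [
    {"image": "A1", "container": "B1", "in_format": "jpeg", "out_format": "tensor_v1"},
    {"image": "A2", "container": "B2", "in_format": "tensor_v2", "out_format": "tensor_v2"},
    {"image": "A3", "container": "B3", "in_format": "tensor_v2", "out_format": "labels"},
]
arranged = insert_adapters(configs)
```

Only the B1/B2 boundary gets an adapter here, since B2's output already matches B3's input.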
In addition, the second server may further obtain the execution order of the subtasks in the subtask cluster, either from the arrangement order of the subtasks within the subtask cluster, or from related information indicating the execution order that is included in the configuration file from which the subtask cluster is obtained; in the latter case, the second server acquires that related information to obtain the execution order.
In this way, the second server may arrange the container configuration information of the subtasks within the combined configuration information according to the execution order, i.e., the order of the container configuration entries in the combined configuration information is the execution order of the subtasks: the container configuration information of an earlier-executed subtask is placed before that of a later-executed subtask. Correspondingly, the first server may recover the execution order of the subtasks from the ordering of the container configuration entries in the combined configuration information.
For example, for a subtask cluster including subtask A, subtask B, and subtask C with the execution order subtask B -> subtask A -> subtask C, the arranged combined configuration information may be represented as {{identifier A2, identifier B2}; {identifier A1, identifier B1}; {identifier A3, identifier B3}}.
In addition, the target scheduling command may further include file information, where the file information is used to indicate a file to be processed by the first server, and the file information may include a name and a download address of the file to be processed, so that the first server may download the file to be processed based on the file information.
In step 103, before sending the scheduling command, the second server analyzes the resources, processing capacity, and the like of each processing server, and sends the scheduling command to a processing server with sufficient resources and processing capacity.
Furthermore, the scheduling commands sent by the second server may be of two types: the first type schedules the first server to process a single subtask of the to-be-processed task, and the second type schedules the first server to jointly process a plurality of subtasks of the to-be-processed task. The target scheduling command mentioned in the embodiment of the present invention is a scheduling command of the second type.
After generating the target scheduling command, the second server sends the target scheduling command to the first server, so that the first server creates a container combination according to the target scheduling command.
It should be noted that the subtask cluster may include all subtasks of the to-be-processed task, or only some of them. When it includes only some subtasks, then, in order to execute the to-be-processed task completely, the second server may schedule the remaining subtasks of the to-be-processed task, outside the subtask cluster, to other processing servers for processing.
When other subtasks are scheduled, the first type of scheduling command can be sent, and the second type of scheduling command can also be sent according to the actual scheduling situation. Preferably, in order to save time and resources to the maximum extent, the subtask cluster may include all subtasks of the to-be-processed task, so that the to-be-processed task can be processed only by downloading the to-be-processed file once and preprocessing once.
In practical applications, the second server may generate the target scheduling command through a Kubernetes tool deployed on the second server and send it to the first server; correspondingly, the first server may create the container combination in response to the relevant information in the target scheduling command. In this way, the second server can schedule the first server through the Kubernetes tool to create the container combination.
The second server may schedule the first server to generate the container combination in the form of a Pod. A Pod may include the containers corresponding to at least two subtasks and is the minimum scheduling unit of the Kubernetes tool. Within a Pod scheduled by the Kubernetes tool, the containers may transmit their input/output data through inter-process communication inside the Pod, while separate Pods transmit data to each other by name or port number. That is to say, the target scheduling command generated by the Kubernetes tool carries the combined configuration information as Pod configuration information; correspondingly, after receiving the target scheduling command, the first server parses the Pod configuration information and creates a Pod for the containers corresponding to the subtasks.
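For concreteness, a multi-container Pod manifest of the kind a Kubernetes scheduler consumes might look like the following, expressed here as a Python dict. The pod name, image names, and registry are illustrative assumptions, not values from the patent; the point is that both subtask containers belong to one Pod and are therefore co-scheduled onto the same node.

```python
# Hypothetical multi-container Pod manifest as a Python dict. Image names
# and the pod name are illustrative assumptions; what matters is that both
# subtask containers live in a single Pod, so they share one IP address and
# can exchange data locally.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "subtask-cluster-pod"},
    "spec": {
        "containers": [
            {"name": "subtask-a",
             "image": "registry.example.com/inference-model-a:latest"},
            {"name": "subtask-b",
             "image": "registry.example.com/inference-model-b:latest"},
        ]
    },
}

container_names = [c["name"] for c in pod_manifest["spec"]["containers"]]
```

Serialized to YAML, such a dict is what `kubectl apply` would accept; the patent's "combined configuration information" plays an analogous role inside the target scheduling command.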
The combined configuration information in the target scheduling command may be configuration information of one Pod or of multiple Pods. Specifically, the target scheduling command may further include combination information indicating how many Pods the first server should create for the subtask cluster. Correspondingly, the container combination created by the first server in response to the target scheduling command may include at least one Pod: the first server may instantiate the image file of the inference model of each subtask in the subtask cluster to obtain the corresponding container, and combine the containers into one or more Pods.
It should be noted that, if the target scheduling command includes configuration information of only one Pod, the configuration information of that Pod may include first data transfer information, which indicates the data transfer direction of each container in the Pod. For example, the first data transfer information may be: container corresponding to identifier B1 -> container corresponding to identifier B2 -> container corresponding to identifier B3; that is, based on identifier B2, the output data of the container corresponding to identifier B1 is transferred to the container corresponding to identifier B2 through inter-process communication, and based on identifier B3, the output data of the container corresponding to identifier B2 is transferred to the container corresponding to identifier B3 through inter-process communication.
If the target scheduling command includes configuration information of at least two Pods, then, besides the first data transfer information in the configuration information of each Pod, the combined configuration information also needs to include second data transfer information, which indicates the data transfer direction between the Pods. For example, if the second data transfer information is Pod1 -> Pod2, the output data of Pod1 may be delivered, based on the name or port number of Pod2, through Pod1's port to the corresponding Pod2; correspondingly, Pod2 receives the data delivered by Pod1 through Pod2's port number.
In practical applications, for example, the combined configuration information may be expressed as { { configuration information of Pod1 }; { configuration information of Pod2 }; second data transfer information }, where the configuration information of Pod1 may be represented as { { identifier a1, identifier B1 }; { identifier a2, identifier B2 }; first data transfer information }.
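The nesting just described can be sketched as a data structure. The field names below are illustrative assumptions; the patent specifies the nesting, not a schema.

```python
# Hedged sketch of the nested combined configuration information for two
# Pods. Field names ("pods", "first_data_transfer", ...) are illustrative;
# only the nesting mirrors the structure described above.
combined_config = {
    "pods": [
        {
            "name": "Pod1",
            "containers": [("identifier a1", "identifier B1"),
                           ("identifier a2", "identifier B2")],
            # first data transfer information: flow between containers inside Pod1
            "first_data_transfer": ["B1 -> B2"],
        },
        {
            "name": "Pod2",
            "containers": [("identifier a3", "identifier B3")],
            "first_data_transfer": [],
        },
    ],
    # second data transfer information: flow between the Pods themselves
    "second_data_transfer": ["Pod1 -> Pod2"],
}
```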
In step 104, the target scheduling command may serve merely as a trigger command: when the first server receives the target scheduling command sent by the second server, it is triggered to perform the subsequent steps. Alternatively, the first server may perform the subsequent steps in response to the combined configuration information after receiving the target scheduling command. In the following embodiments, the case where the first server responds to the combined configuration information after receiving the target scheduling command is described in detail as an example.
Specifically, the first server may respond to the task information in the target scheduling command, where the task information may include the identification information of the image file implementing the inference model of each subtask, or the service identifier of each subtask. Thus, the subtask cluster of the to-be-processed task can be obtained based on that identification information or those service identifiers.
The first server may further respond to file information in the target scheduling command, where the file information may include a name and a download address of the file to be processed, and the first server may download the file to be processed corresponding to the name based on the download address.
Further, the first server may respond to the combined configuration information in the target scheduling command and obtain the arrangement order of the container configuration information corresponding to each subtask, thereby obtaining the execution order of the subtasks. Subsequently, after the container combination is created, task processing can be performed on the file to be processed based on that execution order.
In step 105, the first server may create a container combination for the subtask cluster in response to the combined configuration information in the target scheduling command, where a container in the container combination is a running instance of the image file of the inference model that implements the subtask; that is, each container in the container combination is a process. If the image file of the inference model implementing a subtask is called a service, containerizing the image file starts a process of that service. The containers do not interfere with each other while running, and the container combination can transfer the containers' input/output data through inter-process communication.
It should be noted that, when the first server creates the container combination for the subtask cluster, if the number of subtasks in the subtask cluster is greater than or equal to a preset task threshold (for example, 4), multiple Pods may be created according to the actual situation, such as the combination information in the target scheduling command. The Internet Protocol (IP) addresses of these Pods are the same, but the names and port numbers allocated to them by the first server may differ.
For example, if the subtask cluster includes 4 subtasks, namely subtask A, subtask B, subtask C, and subtask D, with the execution order A -> B -> C -> D, the first server may, according to the target scheduling command, create two Pods for the subtask cluster: one Pod named Pod1 for subtask A and subtask B, and one Pod named Pod2 for subtask C and subtask D. The port numbers configured for Pod1 and Pod2 are different.
The first server responds to the target scheduling command, and after the container combination is created, running the image file of each container in the container combination implements task processing of the file to be processed. The first server may actively run the image files, for example by starting them directly after the container combination is created, or it may be triggered to run them, for example when a set time is reached or a run command is received. There is no specific limitation on how the running of the image file of each container in the container combination is started.
When starting to run the image file of each container in the container combination, the first server determines the processing order of the image files based on the acquired execution order of the subtasks in the subtask cluster; then, based on that processing order, it sequentially uses the image file of each container in the container combination to perform task processing on the file to be processed.
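The two steps just described — fixing the processing order and then running each container's image over the intermediate data — amount to a sequential pipeline, sketched below. Each "container" is modeled as a plain function; the real system would run image files and pass data via inter-process communication instead.

```python
# Minimal pipeline sketch: each "container" is a function transforming its
# input; the output of one subtask's container feeds the next. This stands
# in for running the actual image files.

def process_file(containers, execution_order, file_to_process):
    """Run each container over the data in the subtasks' execution order."""
    data = file_to_process
    for subtask in execution_order:      # processing order = execution order
        data = containers[subtask](data)
    return data

containers = {
    "A": lambda d: d + ["a-done"],
    "B": lambda d: d + ["b-done"],
    "C": lambda d: d + ["c-done"],
}

result = process_file(containers, ["A", "B", "C"], ["raw-file"])
```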
In the task processing method provided by this embodiment, the second server schedules the image file for implementing the inference model of each subtask in the subtask cluster to the same first server in a container combination manner, and accordingly, the first server constructs a container combination in a container combination manner based on the image file for implementing the inference model of each subtask in the subtask cluster. Therefore, a combined model of the inference model can be constructed, the inference models of all subtasks in the subtask cluster can be combined for use, and further, the task processing of the file to be processed can be realized under the condition that the mirror image file of each container in the container combination is operated on the basis of the combined model of the inference model.
Because the constructed combined inference model resides on the same server, the file to be processed downloaded by that server can be used by every inference model in the combination: the file only needs to be downloaded once for the combined model, only one round of preprocessing is required, and each inference model in the combination can reuse the data. In this way, one download and one preprocessing of the file to be processed serve multiple inference models. Compared with the traditional task processing scheme, in which the inference models are deployed on different servers, the embodiment of the present invention avoids each server downloading the file to be processed for its own inference model, and also avoids each server repeating preprocessing work such as data preparation and vectorization on its own downloaded file, thereby saving resources and improving task processing efficiency.
The foregoing embodiment describes in detail an implementation process in which the second server schedules the first server to create the container combination, so as to implement configuration preparation for performing task processing on the file to be processed by the first server, and as for the implementation process in which the first server performs task processing on the file to be processed based on the container combination, details will be described later.
The task processing method provided by the embodiment of the invention is described below based on the second server and the first server, respectively.
Referring to fig. 3, a flowchart of a task processing method applied to a second server according to an embodiment of the present invention is shown. As shown in fig. 3, the method may include the steps of:
step 301, a subtask cluster of a to-be-processed task for a to-be-processed file is obtained.
Wherein the subtask cluster includes at least two related subtasks of the to-be-processed task.
Step 302, generating a target scheduling command based on the subtask cluster.
The target scheduling command is used to instruct the first server to create a container combination for the subtask cluster. The container combination includes a container corresponding to each subtask in the subtask cluster, each container contains an image file of the inference model for realizing the subtask, and if the image file of each container in the container combination is run, task processing can be performed on the file to be processed.
Step 303, sending the target scheduling command to the first server.
Step 301 is similar to step 101 of the first embodiment, step 302 is similar to step 102 of the first embodiment, and step 303 is similar to step 103 of the first embodiment, so that the explanation thereof may refer to step 101 to step 103 of the first embodiment, which will not be repeated herein.
In the task processing method provided in this embodiment, the second server generates a target scheduling command based on the subtask cluster and sends it to the first server. The target scheduling command instructs the first server to create a container combination for the subtask cluster; the container combination includes a container corresponding to each subtask in the subtask cluster, and each container contains an image file of the inference model of its subtask. In this way, the second server can schedule the first server, through the target scheduling command, to create the container combination for the subtask cluster, so that one download of the file to be processed can indirectly serve multiple services; repeated downloads by those services are avoided, and much repeated work such as data preparation and vectorization is also avoided, thereby saving resources and improving task processing efficiency.
Referring to fig. 4, a second flowchart of a task processing method applied to a first server in an embodiment of the present invention is shown. As shown in fig. 4, the method may include the steps of:
step 401, if a target scheduling command sent by the second server is received, acquiring a file to be processed and a subtask cluster.
The subtask cluster includes at least two related subtasks of the to-be-processed task for the to-be-processed file, and the target scheduling command is used to instruct the first server to create a container combination for the subtask cluster.
Step 402, creating a container combination for the subtask cluster; if the image file of each container in the container combination is run, task processing of the file to be processed can be realized.
The container combination comprises a container corresponding to each subtask in the subtask cluster, and the container contains an image file for realizing an inference model of the subtask.
Step 401 is similar to step 104 of the first embodiment, and step 402 is similar to step 105 of the first embodiment, so that the explanation thereof may refer to step 104 to step 105 of the first embodiment, which will not be repeated herein.
It should be noted that, if the created container combination includes only one Pod, the container combination is that Pod. After the image file of each container in the Pod is started, the containers in the Pod may communicate through processes; when the image files of the containers in the Pod created for the subtask cluster are run, data generated during running can be transmitted directly between the containers through inter-process communication on the first server, according to the first data transfer information in the configuration information of the Pod.
If the container combination includes multiple Pods, such as Pod1 and Pod2, the image files of the containers in the container combination are run as follows to perform task processing on the file to be processed.
Specifically, first, according to the second data transfer information in the combined configuration information, the service in Pod1 is started, and each container in Pod1 transfers data between services through interprocess communication according to the first data transfer information in the configuration information of Pod 1.
Then, after each service in Pod1 is completed, the output data of Pod1 is delivered to the corresponding Pod2 through the port number of Pod1 based on the name or port number of Pod 2.
Finally, Pod2 receives the data delivered by Pod1 through Pod2's port number, then starts the services in Pod2, and each container in Pod2 transfers data between services through inter-process communication according to the first data transfer information in the configuration information of Pod2. In this way, running the image file of each container in the container combination implements task processing of the file to be processed.
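The Pod-to-Pod handoff by port number can be illustrated with a loopback socket. Everything here is simulated on one machine: port 0 lets the OS pick a free port, standing in for the port the first server would have assigned to Pod2, and the payload is illustrative.

```python
import socket
import threading

# Illustrative sketch: "Pod1" delivers its output to "Pod2" through Pod2's
# port number, both simulated on the loopback interface.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: OS assigns a free port
listener.listen(1)
pod2_port = listener.getsockname()[1]    # the port "allocated to Pod2"

received = []

def pod2_receive():
    # Pod2 side: accept the connection and read Pod1's output data.
    conn, _ = listener.accept()
    received.append(conn.recv(1024).decode())
    conn.close()

t = threading.Thread(target=pod2_receive)
t.start()

# Pod1 side: connect to Pod2's port and deliver the output data.
sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(("127.0.0.1", pod2_port))
sender.sendall(b"pod1 output data")
sender.close()

t.join()
listener.close()
```

In a real cluster the two Pods would be distinct network endpoints; the addressing by name or port number is the same idea.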
It should be noted that, if the subtask cluster includes all subtasks of the to-be-processed task, the first server can complete the to-be-processed task by running the image file of each container in the container combination. If the subtask cluster includes only some subtasks of the to-be-processed task, then, to ensure that the server cluster can complete the task, the second server must also perform scheduling to enable data communication among the processing servers that process the subtasks of the to-be-processed task.
In the task processing method provided by the embodiment of the present invention, if the first server receives a target scheduling command generated by the second server based on a subtask cluster of a to-be-processed task, the first server responds to the target scheduling command to create a container combination for the subtask cluster, and if the first server runs an image file of each container in the container combination, task processing can be performed on the to-be-processed file.
Because the image file of each container in the container combination corresponds to a service, and the containers in the container combination communicate through processes, data generated while running the image files of the containers created for the subtask cluster can be transmitted directly among the services through inter-process communication on the first server. Thus one download of the file to be processed can serve multiple services, repeated downloads by those services are avoided, and much repeated work such as data preparation and vectorization is also avoided, thereby saving resources.
In addition, because certain time is consumed for downloading the files to be processed and preprocessing work, the downloading times and the preprocessing times of the files to be processed are reduced, the task processing time of the files to be processed can be saved, and the task processing efficiency is improved.
Further, based on the above-mentioned embodiment of the first server, referring to fig. 5, a third flowchart of the task processing method in the embodiment of the present invention is shown and applied to the first server. As shown in fig. 5, the method may include the steps of:
step 501, if a target scheduling command sent by a second server is received, obtaining a file to be processed and a subtask cluster.
The subtask cluster includes at least two related subtasks of the to-be-processed task for the to-be-processed file, and the target scheduling command is used to instruct the first server to create a container combination for the subtask cluster.
Step 502, based on the service identifier of each subtask in the subtask cluster, obtaining the image file of the inference model for implementing each subtask in the subtask cluster from a pre-constructed image warehouse.
Step 503, instantiating each image file to obtain a container of each image file.
Step 504, creating a container combination of the subtask cluster based on the container of each image file; if the image file of each container in the container combination is run, task processing of the file to be processed can be realized.
The above step 501 is similar to the step 104 of the first embodiment, and the explanation thereof can refer to the step 104 of the first embodiment, which will not be described again here.
In step 502, the image file of the inference model for implementing the subtask may have three sources, the first source is a public image repository, the second source is a private image repository, and the third source is a local image repository. Based on the service identifier corresponding to each subtask in the subtask cluster, the image file of the inference model for realizing each subtask in the subtask cluster is pulled from any one of the three sources according to actual conditions.
It should be noted that, in order to successfully pull the image files of the inference models for the subtasks in the subtask cluster, the image files need to be created and uploaded to the repository before they are acquired. Specifically, the inference model for each subtask in the subtask cluster can be encapsulated to generate a corresponding image file. In addition, so that the inference models can be used in combination and data can flow smoothly between the subsequent services, the input data and/or output data of the inference models may need to undergo data format conversion, making the output format of each inference model consistent with the input format of the next one according to the execution order of the subtasks in the subtask cluster.
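The format-alignment idea — each model's output matching the next model's expected input — can be sketched as a pair of conversion helpers. The "standard" inter-model format used here (a dict with a "vector" field) is an assumption for illustration; the patent only requires that adjacent models' output and input formats agree.

```python
# Hedged sketch of data format conversion between adjacent inference models.
# The dict-with-"vector" interchange format is an illustrative assumption.

def model_a_to_standard(raw_output):
    """Wrap model A's raw tuple/list output in the agreed inter-model format."""
    return {"vector": list(raw_output)}

def standard_to_model_b(record):
    """Unwrap the agreed format into what model B consumes."""
    return record["vector"]

# With both sides converted to the shared format, the handoff is seamless.
handoff = standard_to_model_b(model_a_to_standard((0.1, 0.9)))
```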
In step 503, after the image files of the inference model for each subtask in the subtask cluster are obtained, a container of each image file is created based on each image file. Specifically, each image file is instantiated to obtain a container of each image file, that is, each image file is started to run to obtain a container of each image file.
In step 504, the first server creates the container combination of the subtask cluster based on the container of each image file mainly by setting the data transfer direction between the containers, that is, by setting the data flow direction between the containers. Specifically, the data transfer direction may be set according to the obtained execution order of the subtasks corresponding to the containers; or according to the first data transfer information in the combined configuration information; or according to both the first data transfer information and the second data transfer information in the combined configuration information.
For example, assume the subtask cluster includes subtask A, subtask B, and subtask C with the execution order A -> B -> C, corresponding respectively to container A (identifier B1), container B (identifier B2), and container C (identifier B3). The first server sets the data flow direction among the containers as container A -> container B -> container C, so that, during data transmission, the first server transfers the output data of container A to container B through inter-process communication based on identifier B2, and transfers the output data of container B to container C through inter-process communication based on identifier B3. In this way, data flows from container A to container B and finally to container C; when container A has not yet finished processing the service corresponding to subtask A, container B waits until the output data of container A is obtained.
In practical applications, the containers in the container combination may be identified by the service identifiers of the corresponding subtasks, for example, if the data stream set by the first server is directed to container a- > container B, when the container a sends the output data, the container B corresponding to the subtask B is identified based on the service identifier of the subtask B, so as to send the output data to the container B. Of course, the container a may also carry the service identifier of the subtask a when transmitting data, and correspondingly, the container B may also determine whether the received data is transmitted by the container a based on the service identifier of the subtask a when receiving data. Therefore, the correctness of the data flow direction is ensured through the directional sending of the sending end container and the judgment of the receiving end container.
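The sender-side addressing by service identifier and the receiver-side provenance check described above can be sketched as follows. Class and field names are illustrative; the real containers would be separate processes, not Python objects.

```python
# Hedged sketch of directional sending plus receiver-side verification.
# Containers are addressed by the service identifier of their subtask, and
# each message carries the sender's identifier so the receiver can check it.

class Container:
    def __init__(self, service_id, expected_sender=None):
        self.service_id = service_id
        self.expected_sender = expected_sender
        self.inbox = []

    def receive(self, sender_id, data):
        # Receiver-side check: only accept data from the expected container.
        if self.expected_sender is None or sender_id == self.expected_sender:
            self.inbox.append(data)
            return True
        return False

registry = {}   # service identifier -> container

def send(sender, receiver_id, data):
    # Sender-side: locate the target container by its service identifier.
    return registry[receiver_id].receive(sender.service_id, data)

container_a = Container("service-A")
container_b = Container("service-B", expected_sender="service-A")
registry["service-B"] = container_b

ok = send(container_a, "service-B", "output of A")
```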
In this embodiment, a container combination of the subtask cluster is created by setting a data transfer direction between containers, if an image file of each container in the container combination is operated, task processing can be performed on the file to be processed, one-time downloading of the file to be processed can be achieved, a plurality of services can be used, repeated downloading of the file to be processed by the plurality of services is avoided, and moreover, many repeated tasks such as data preparation and vectorization can be avoided, so that time and resources can be saved, and task processing efficiency can be improved.
Moreover, the input and output formats among the inference models of the subtasks are standardized, and the format-standardized inference models are encapsulated into image files, so the inference models can be used in combination, the output efficiency of the inference models can be improved, and manpower and computing overhead can be reduced.
In order to further improve the task processing efficiency, based on the second embodiment, the step 504 specifically includes:
and arranging the containers of the image files based on the execution sequence of each subtask in the subtask cluster to obtain a container combination corresponding to the subtask cluster.
The first server may organize the container of each image file as follows: the sending path of each image file's container is pre-configured so that each container has only one sending path; that is, routing is already configured for the data sending direction of each container. Correspondingly, each container can transmit its output data through its configured sending path.
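A pre-configured single send path per container amounts to a static routing table, sketched below with hypothetical container names.

```python
# Hedged sketch of pre-configured send paths: each container has exactly one
# outgoing route, so no directional lookup or receiver-side check is needed
# at runtime. The route table is illustrative.
routes = {"A": "B", "B": "C", "C": None}   # C is the final container

def path_from(start):
    """Follow each container's single send path to the end of the chain."""
    path, node = [start], start
    while routes[node] is not None:
        node = routes[node]
        path.append(node)
    return path

chain = path_from("A")
```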
Referring to fig. 6, a schematic diagram of a container combination after container arrangement according to an embodiment of the present invention is shown. As shown in fig. 6, the container combination includes container A, container B, and container C, where container A contains the image file of the inference model of subtask A and the service corresponding to subtask A, container B contains the image file of the inference model of subtask B and the service corresponding to subtask B, and container C contains the image file of the inference model of subtask C and the service corresponding to subtask C. Because the execution order of the subtasks in the subtask cluster is A -> B -> C, the arrangement order of the containers is likewise container A -> container B -> container C. After arrangement, the sending-end container no longer needs to perform directional sending, and the receiving-end container no longer needs to perform verification to ensure the correctness of the data flow, so task processing efficiency can be further improved.
In practical application, the subtasks are realized through inference models. Referring to fig. 7, a second schematic diagram of a container combination after container arrangement in the embodiment of the present invention is shown. As shown in fig. 7, the container combination includes container A, container B, and container C, where container A contains the image file of inference model A, container B contains the image file of inference model B, and container C contains the image file of inference model C. Because the execution order of the subtasks corresponding to the inference models is A -> B -> C, the arrangement order of the containers is likewise container A -> container B -> container C. After arrangement, the sending-end container no longer needs to perform directional sending, and the receiving-end container no longer needs to perform verification to ensure the correctness of the data flow, so task processing efficiency can be further improved.
Further, for a case that data formats of inference models corresponding to two adjacent subtasks in the subtask cluster after the data format conversion are not completely consistent, the combined configuration information of the target scheduling command sent by the second server may include a scheduling parameter for indicating creation of a target container, where the target container is used for data format conversion. The first server responds to a target scheduling command and creates a target container for data format conversion; the target container is arranged between a first container and a second container in the container combination, and the first container and the second container are containers of subtasks corresponding to two adjacent inference models with unmatched data formats in the subtask cluster. As shown in fig. 6 and 7, the first container is container B, the second container is container C, and a target container is set between container B and container C to format-convert the output data of container B to match the format of the input data of container C.
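The role of the target container can be sketched as an adapter slotted between container B and container C. The concrete formats (a JSON string from B, a Python dict into C) are illustrative assumptions, not formats specified in the patent.

```python
import json

# Hedged sketch of a target container converting container B's output format
# into container C's expected input format. The JSON-vs-dict formats are
# illustrative assumptions.

def container_b(data):
    return json.dumps({"score": data})   # B emits a JSON string

def target_container(b_output):
    return json.loads(b_output)          # adapter: JSON string -> dict

def container_c(record):
    return record["score"] * 2           # C expects a dict with "score"

result = container_c(target_container(container_b(21)))
```

Without the adapter, container C would receive a raw string it cannot index; the target container restores the match between adjacent formats.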
Further, after the container combination is created, in order to perform task processing on the file to be processed, the first server may actively or passively trigger the running of the image file of each container in the container combination.
The following describes in detail an implementation process of the first server performing task processing on the file to be processed based on the container combination.
Referring to fig. 8, a fourth flowchart of a task processing method in the embodiment of the present invention is shown, which is applied to the first server. As shown in fig. 8, the method may include the steps of:
step 801, acquiring a container combination corresponding to a subtask cluster of a file to be processed.
The subtask cluster comprises at least two related subtasks of the to-be-processed task of the file to be processed, the container combination comprises a container corresponding to each subtask in the subtask cluster, and each container contains the image file of the inference model realizing the corresponding subtask.
Step 802, determining a processing order of the image file of each container in the container combination according to an execution order of each subtask in the subtask cluster.
And 803, sequentially utilizing the mirror image file of each container in the container combination to perform task processing on the file to be processed based on the processing sequence.
In step 801, the container combination may be created in advance, for example, the container combination is created by scheduling the first server for the second server as mentioned in the above embodiment.
In step 802, the processing order of the image files may be understood as the data transfer direction among the containers. The processing order of the image files in the container combination is generally consistent with the execution order of the subtasks; that is, by running the image file of each container in this order, each subtask is executed in sequence and task processing is performed on the file to be processed.
In step 803, specifically, the container combination includes M containers, an ith container contains an ith image file, and the ith image file is an image file of an ith inference model for implementing the ith subtask; wherein M is a positive integer, and i takes the values of 1, 2, … … and M;
referring to fig. 9, a detailed flowchart of step 803 of the task processing method in the embodiment of the present invention is shown. As shown in fig. 9, the step 803 specifically includes the following steps:
step 8031, obtain first target data.
When i is equal to 1, the first target data is the file to be processed; when i is greater than 1, the first target data is the output data obtained by running the (i-1)th image file on the input data of the (i-1)th inference model;
step 8032, running the ith image file based on the first target data to obtain second target data; when i is smaller than M, the second target data is used as the input data of the (i+1)th inference model; and when i is equal to M, the second target data is determined as the task processing output result of the subtask cluster.
In step 8031, first target data is acquired for each container. When the container is the first container, the first target data is the file to be processed; otherwise, the first target data is the output data obtained by running the previous container's image file on the input data of the previous inference model.
In step 8032, for each container, the image file in the container is run on the first target data to obtain second target data. When the container is not the last container in the container combination, the second target data serves as the input data of the next inference model; when it is the last container, the second target data is determined as the task processing output result of the subtask cluster.
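Steps 8031 and 8032 amount to threading one data value through the ordered containers. A minimal sketch under assumed names (the `run_pipeline` helper and the lambda stand-ins for the inference models are illustrative, not part of the patent):

```python
# Run the i-th container's image file on the previous output; the last output
# is the task processing result of the subtask cluster.
def run_pipeline(image_runners, file_to_process):
    """image_runners: ordered callables, one per container in the combination."""
    data = file_to_process          # first target data when i == 1
    for run_image in image_runners: # run the i-th image file
        data = run_image(data)      # second target data; input of model i+1
    return data                     # task processing output when i == M

result = run_pipeline(
    [lambda x: x + ["segmented"],   # stand-in for inference model A
     lambda x: x + ["faces"],       # stand-in for inference model B
     lambda x: x + ["suspects"]],   # stand-in for inference model C
    ["raw_file"],
)
# result == ["raw_file", "segmented", "faces", "suspects"]
```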
Further, based on the embodiment shown in fig. 9, if the data formats of the (i-1)th inference model and the ith inference model do not match, the container combination further includes a target container located between the (i-1)th container and the ith container, and the target container is used to convert the output data format of the (i-1)th inference model into the input data format of the ith inference model;
the step 8032 specifically includes:
using the first target data as the input data of the target container, and running the target container to obtain target format data, where the format of the target format data matches the input data format of the ith inference model;
and using the target format data as the input data of the ith inference model, running the ith image file to obtain the second target data.
That is, the first container and the second container are the containers of the subtasks corresponding to the two adjacent inference models in the subtask cluster whose data formats do not match, and the target container is arranged between them: the container preceding the target container is the first container, and the container following it is the second container.
Specifically, the target container takes the output data of the preceding container, that is, the first target data, as its input data. Running the target container yields target format data whose format matches the input data format of the next inference model, and the target format data is transferred to the next container. The next container then takes the target format data as the input data of its inference model and runs its image file to obtain the second target data.
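As a concrete illustration, consider the 128-bit to 256-bit vector mismatch between inference model B and inference model C in the document's example scenario. The sketch below is a hedged guess at what such a target container might compute; zero-padding is only one plausible conversion and is our assumption, not specified by the patent.

```python
# Hypothetical format-conversion ("target") container body: widen a 128-element
# vector to the 256-element input format expected by the next inference model.
def convert_vec128_to_vec256(vec128):
    """Pad a 128-element vector to 256 elements (assumed conversion rule)."""
    assert len(vec128) == 128, "target container expects the 128-bit output of model B"
    return vec128 + [0.0] * 128  # pad with zeros to match the expected width

out = convert_vec128_to_vec256([1.0] * 128)
# len(out) == 256, with the original 128 values preserved at the front
```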
In addition, between any two adjacent containers in the container combination, the output data of the preceding container is transferred to the following container by means of inter-process communication.
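The patent does not fix a specific inter-process communication mechanism, so the following is only an illustrative sketch: two adjacent "containers" are modeled as concurrent workers connected by a pipe, with the upstream worker handing its output to the downstream one.

```python
# Adjacent-container handoff modeled with a multiprocessing Pipe between threads.
from multiprocessing import Pipe
from threading import Thread

def upstream(conn):
    conn.send({"stage": "B", "vector": [1, 2, 3]})  # preceding container's output
    conn.close()

def downstream(conn, out):
    data = conn.recv()                              # following container's input
    out.append(data["vector"] + [4])                # stand-in for further processing

send_end, recv_end = Pipe()
result = []
t1 = Thread(target=upstream, args=(send_end,))
t2 = Thread(target=downstream, args=(recv_end, result))
t1.start(); t2.start(); t1.join(); t2.join()
# result[0] == [1, 2, 3, 4]
```

In a real deployment the same pattern could be realized with sockets or shared memory between container processes; the pipe is used here only because it keeps the sketch self-contained.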
According to the task processing method provided by the embodiment of the present invention, the image file of each container in the container combination is run to perform task processing on the file to be processed, so that the inference models of the subtasks in the subtask cluster can be used in combination through the assembled combination model. Because the file to be processed needs to be downloaded only once for the combined model, repeated downloading of the file for each inference model is avoided, as is repeated preprocessing work such as data preparation and vectorization. The file to be processed therefore needs to be downloaded and preprocessed only once before the inference models are used, which saves resources and improves task processing efficiency.
The following is an example to describe the task processing method provided by the embodiment of the present invention in detail.
Application scenario: the to-be-processed task comprises three subtasks, namely subtask A, subtask B, and subtask C, executed in the order A -> B -> C. Subtask A corresponds to inference model A and is used for object segmentation; subtask B corresponds to inference model B and is used for face recognition; subtask C corresponds to inference model C and is used for recognizing a specific suspect.
First, the data transfer formats of inference model A, inference model B, and inference model C are unified: the output data of inference model A is defined as a 128-bit vector, and the input data of inference model B is defined as a 128-bit vector. Owing to the nature of the inference models, however, some data formats cannot be unified; for example, the output data of inference model B is a 128-bit vector while the input data of inference model C is a 256-bit vector.
Then, inference model A, inference model B, and inference model C are packaged separately to generate the image file of each inference model, namely image file A, image file B, and image file C, and the image files are uploaded to an image repository.
Then, the second server obtains the type of the file to be processed and the task to be processed aiming at the file to be processed;
then, based on the type of the file to be processed and the attribute of the to-be-processed task, the second server queries a preset configuration file to obtain the subtask cluster of the to-be-processed task, namely subtask A, subtask B, and subtask C, with the execution order A -> B -> C; the configuration file comprises a mapping table from the type of the file to be processed and the attribute of the to-be-processed task to the subtask cluster;
then, the second server determines identification information of a mirror image file of an inference model for realizing each subtask in the subtask cluster;
then, the second server performs container configuration on the image file of the inference model for realizing each subtask to obtain container configuration information corresponding to each subtask; the container configuration information corresponding to the subtasks is used for indicating that containers are created for the subtasks;
then, the second server combines the container configuration information corresponding to each subtask to generate a target scheduling command comprising the combined configuration information; the combination configuration information is used for indicating that a container combination is created for a container corresponding to each subtask;
then, the second server sends the target scheduling command to the first server;
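The second server's side of the steps above can be sketched as follows. All names, keys, and image identifiers here are illustrative assumptions: a mapping table keyed by (file type, task attribute) yields the subtask cluster, per-subtask container configurations are built, and they are combined into one target scheduling command.

```python
# Assumed mapping table and image identifiers; not the patent's actual data.
SUBTASK_TABLE = {
    ("video", "suspect_search"): ["subtask_A", "subtask_B", "subtask_C"],
}
IMAGE_IDS = {"subtask_A": "img_A:v1", "subtask_B": "img_B:v1", "subtask_C": "img_C:v1"}

def build_scheduling_command(file_type, task_attr):
    cluster = SUBTASK_TABLE[(file_type, task_attr)]     # look up the subtask cluster
    container_configs = [
        {"subtask": s, "image": IMAGE_IDS[s]} for s in cluster  # per-subtask config
    ]
    # Combined configuration: instructs the first server to create the combination.
    return {"action": "create_container_combination", "containers": container_configs}

cmd = build_scheduling_command("video", "suspect_search")
# cmd["containers"][0] == {"subtask": "subtask_A", "image": "img_A:v1"}
```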
then, the first server receives the target scheduling command, responds to the target scheduling command, and acquires a file to be processed and a subtask cluster, wherein the subtask cluster comprises a subtask A, a subtask B and a subtask C;
then, the first server continuously responds to the target scheduling command, and mirror image files of inference models for realizing the subtasks A, B and C are respectively obtained and are the mirror image file A, the mirror image file B and the mirror image file C;
then, the first server instantiates each image file to obtain containers of each image file, wherein the containers are a container A, a container B and a container C;
then, the first server arranges container A, container B, and container C based on the execution order A -> B -> C, configures the sending path of each container, and obtains the container combination corresponding to the subtask cluster. Because the data formats of inference model B and inference model C do not match, the first server, continuing to respond to the target scheduling command, creates a target container for data format conversion and places it between container B and container C so as to convert the 128-bit vector output by container B into the 256-bit vector input required by container C, as shown in fig. 7. After the final arrangement, the data transfer direction of the created container combination is defined as container A -> container B -> target container -> container C;
then, as shown in fig. 7, the first server starts a container a in the container combination, takes the file to be processed as input data of the container a, and runs a mirror image file of the inference model a to obtain output data of the container a; and transmitting the output data of the container A to the container B;
then, the first server starts a container B in the container combination, takes the output data of the container A as the input data of the container B, and runs a mirror image file of an inference model B to obtain the output data of the container B; and transmitting the output data of the container B to the target container;
then, the first server starts a target container, takes the output data of the container B as the input data of the target container, and operates the target container to obtain target format data; and transmits the object format data to the container C;
then, the first server starts the container C, takes the target format data as the input data of the container C, operates the container C, and obtains the output data of the container C; the output data of the container C is the task processing output result;
and finally, the first server determines the task processing output result.
The following describes a task processing device according to an embodiment of the present invention.
Referring to fig. 10, a schematic structural diagram of a task processing device according to an embodiment of the present invention is shown, which is applied to a first server. As shown in fig. 10, the task processing device 1000 includes:
a first obtaining module 1001, configured to obtain a to-be-processed file and a subtask cluster if a target scheduling command sent by a second server is received; the subtask cluster comprises at least two related subtasks of the to-be-processed tasks of the to-be-processed files, and the target scheduling command is used for indicating the first server to create a container combination for the subtask cluster;
a creating module 1002, configured to create a container combination for the subtask cluster; the container combination comprises a container corresponding to each subtask in the subtask cluster, and a mirror image file for realizing an inference model of the subtask is contained in the container; and if the mirror image file of each container in the container combination is operated, the task processing of the file to be processed can be realized.
Optionally, referring to fig. 11, a detailed structural diagram of a creating module of the task processing device in the embodiment of the present invention is shown. As shown in fig. 11, the creating module 1002 specifically includes:
a first obtaining unit 10021, configured to obtain, based on the service identifier of each subtask in the subtask cluster, the image file of the inference model realizing each subtask from a pre-constructed image repository;
an instantiation unit 10022, configured to instantiate each image file, to obtain a container for each image file;
a first creating unit 10023, configured to create a container combination of the subtask clusters based on the container of each image file.
Optionally, the subtasks in the subtask cluster are related in execution order; the first creating unit 10023 is specifically configured to arrange the container of each image file based on the execution order of the subtasks in the subtask cluster to obtain the container combination corresponding to the subtask cluster.
Optionally, as shown in fig. 11, the creating module 1002 further includes:
a second creating unit 10024, configured to create a target container for data format conversion when there is a mismatch between data formats of inference models corresponding to two adjacent subtasks in the subtask cluster;
the target container is arranged between a first container and a second container in the container combination, and the first container and the second container are containers of subtasks corresponding to two adjacent inference models with unmatched data formats in the subtask cluster.
The device provided by the embodiment of the present invention can implement each process implemented in the first server-side method embodiment, and can achieve the same beneficial effects, and for avoiding repetition, the details are not repeated here.
Referring to fig. 12, a second schematic structural diagram of a task processing device according to an embodiment of the present invention is shown, and is applied to a second server. As shown in fig. 12, the task processing device 1200 includes:
a second obtaining module 1201, configured to obtain a subtask cluster of a to-be-processed task for a to-be-processed file, where the subtask cluster includes at least two related subtasks of the to-be-processed task;
a generating module 1202, configured to generate a target scheduling command based on the subtask cluster; the target scheduling command is used for indicating a first server to create a container combination for the subtask cluster, wherein the container combination comprises a container corresponding to each subtask in the subtask cluster, and a mirror image file of an inference model for realizing the subtask is contained in the container; if the mirror image file of each container in the container combination is operated, the task processing of the file to be processed can be realized;
a sending module 1203, configured to send the target scheduling command to the first server.
Optionally, referring to fig. 13, a detailed structural diagram of the generating module of the task processing device in the embodiment of the present invention is shown. As shown in fig. 13, the generating module 1202 specifically includes:
a determining unit 12021, configured to determine identification information of an image file of an inference model that implements each subtask in the subtask cluster;
a configuration unit 12022, configured to perform container configuration on the image file of the inference model of each subtask, and obtain container configuration information corresponding to each subtask; the container configuration information corresponding to the subtasks is used for indicating that containers are created for the subtasks;
a combining unit 12023, configured to combine the container configuration information corresponding to each of the sub-tasks, and generate a target scheduling command including the combined configuration information; and the combination configuration information is used for indicating that a container combination is created for the container corresponding to each subtask.
The device provided by the embodiment of the present invention can implement each process implemented in the second server-side method embodiment described above, and can achieve the same beneficial effects, and for avoiding repetition, details are not described here again.
Referring to fig. 14, a third schematic structural diagram of a task processing device according to an embodiment of the present invention is shown, applied to a first server. As shown in fig. 14, the task processing device 1400 includes:
a third obtaining module 1401, configured to obtain a container combination corresponding to a subtask cluster of a file to be processed; the subtask cluster comprises at least two related subtasks of the to-be-processed task of the file, the container combination comprises a container corresponding to each subtask in the subtask cluster, and each container contains the image file of the inference model realizing the corresponding subtask;
a determining module 1402, configured to determine, according to an execution order of each subtask in the subtask cluster, a processing order of the image file of each container in the container combination;
and a task processing module 1403, configured to perform task processing on the to-be-processed file sequentially by using the image file of each container in the container combination based on the processing sequence.
Optionally, the container combination includes M containers, an ith container contains an ith image file, and the ith image file is an image file of an ith inference model for implementing the ith subtask; wherein M is a positive integer, and i takes the values of 1, 2, … … and M;
referring to fig. 15, a detailed structural diagram of a task processing module of the task processing device in the embodiment of the present invention is shown. As shown in fig. 15, the task processing module 1403 specifically includes:
a second acquiring unit 14031, configured to acquire first target data; when i is equal to 1, the first target data is the file to be processed, and when i is greater than 1, the first target data is the output data obtained by running the (i-1)th image file on the input data of the (i-1)th inference model;
a running unit 14032, configured to run the ith image file based on the first target data to obtain second target data; when i is smaller than M, the second target data is used as the input data of the (i+1)th inference model; and when i is equal to M, the second target data is determined as the task processing output result of the subtask cluster.
Optionally, if the data formats of the (i-1)th inference model and the ith inference model do not match, the container combination further includes a target container located between the (i-1)th container and the ith container, and the target container is used to convert the output data format of the (i-1)th inference model into the input data format of the ith inference model;
the running unit 14032 is specifically configured to use the first target data as input data of the target container, run the target container, and obtain target format data; wherein the format of the target format data is matched with the input data format of the ith inference model; and taking the target format data as input data of the ith inference model, and operating the ith mirror image file to obtain second target data.
Optionally, any adjacent container in the container combination transfers the output data of the previous container to the next container based on an inter-process communication mode.
The device provided by the embodiment of the present invention can implement each process implemented in the first server-side method embodiment, and can achieve the same beneficial effects, and for avoiding repetition, the details are not repeated here.
The following describes an electronic device provided in an embodiment of the present invention.
An embodiment of the present invention further provides an electronic device, as shown in fig. 16, including a first processor 1601, a first communication interface 1602, a first memory 1603, and a first communication bus 1604, where the first processor 1601, the first communication interface 1602, and the first memory 1603 complete communication with each other via the first communication bus 1604,
a first memory 1603 for storing a computer program;
the first processor 1601 is configured to execute the program stored in the first memory 1603, and implement the following steps:
if a target scheduling command sent by a second server is received, acquiring a file to be processed and a subtask cluster; the subtask cluster comprises at least two related subtasks of the to-be-processed tasks of the to-be-processed files, and the target scheduling command is used for indicating the first server to create a container combination for the subtask cluster;
creating a container combination for the subtask cluster; the container combination comprises a container corresponding to each subtask in the subtask cluster, and a mirror image file for realizing an inference model of the subtask is contained in the container; and if the mirror image file of each container in the container combination is operated, the task processing of the file to be processed can be realized.
Optionally, the first processor 1601 is specifically configured to:
acquiring, based on the service identifier of each subtask in the subtask cluster, the image file of the inference model realizing each subtask from a pre-constructed image repository;
instantiating each image file to obtain a container of each image file;
and creating a container combination of the subtask cluster based on the container of each image file.
Optionally, the first processor 1601 is specifically configured to:
the subtasks in the subtask cluster are related in execution order; the containers of the image files are arranged based on the execution order of the subtasks in the subtask cluster to obtain the container combination corresponding to the subtask cluster.
Optionally, the first processor 1601 is specifically configured to:
under the condition that the data formats of inference models corresponding to two adjacent subtasks are not matched in the subtask cluster, creating a target container for data format conversion;
the target container is arranged between a first container and a second container in the container combination, and the first container and the second container are containers of subtasks corresponding to two adjacent inference models with unmatched data formats in the subtask cluster.
Further, the first processor 1601 is configured to execute the program stored in the first memory 1603, and further performs the following steps:
acquiring a container combination corresponding to a subtask cluster of a file to be processed; the subtask cluster comprises at least two related subtasks of the to-be-processed task of the file, the container combination comprises a container corresponding to each subtask in the subtask cluster, and each container contains the image file of the inference model realizing the corresponding subtask;
determining the processing sequence of the image file of each container in the container combination according to the execution sequence of each subtask in the subtask cluster;
and based on the processing sequence, sequentially utilizing the mirror image file of each container in the container combination to perform task processing on the file to be processed.
Optionally, the container combination includes M containers, an ith container contains an ith image file, and the ith image file is an image file of an ith inference model for implementing the ith subtask; wherein M is a positive integer, and i takes the values of 1, 2, … … and M;
the first processor 1601 is specifically configured to:
acquiring first target data; when i is equal to 1, the first target data is the file to be processed, and when i is greater than 1, the first target data is the output data obtained by running the (i-1)th image file on the input data of the (i-1)th inference model;
running the ith image file based on the first target data to obtain second target data; when i is smaller than M, the second target data is used as the input data of the (i+1)th inference model; and when i is equal to M, the second target data is determined as the task processing output result of the subtask cluster.
Optionally, if the data formats of the i-1 th inference model and the i-th inference model are not matched, the container combination further includes a target container, the target container is located between the i-1 th container and the i-th container, and the target container is used for converting the output data format of the i-1 th inference model according to the input data format of the i-th inference model;
the first processor 1601 is specifically configured to:
taking the first target data as input data of the target container, and operating the target container to obtain target format data; wherein the format of the target format data is matched with the input data format of the ith inference model;
and taking the target format data as input data of the ith inference model, and operating the ith mirror image file to obtain second target data.
Optionally, any adjacent container in the container combination transfers the output data of the previous container to the next container based on an inter-process communication mode.
The embodiment of the present invention further provides an electronic device, as shown in fig. 17, including a second processor 1701, a second communication interface 1702, a second memory 1703 and a second communication bus 1704, where the second processor 1701, the second communication interface 1702 and the second memory 1703 complete communication with each other through the second communication bus 1704,
a second memory 1703 for storing computer programs;
the second processor 1701 is configured to execute the program stored in the second memory 1703, and implement the following steps:
acquiring a subtask cluster of a to-be-processed task for a to-be-processed file, wherein the subtask cluster comprises at least two related subtasks of the to-be-processed task;
generating a target scheduling command based on the subtask cluster; the target scheduling command is used for indicating a first server to create a container combination for the subtask cluster, the container combination comprises a container corresponding to each subtask in the subtask cluster, a mirror image file of an inference model for realizing the subtask is contained in the container, and if the mirror image file of each container in the container combination is operated, task processing can be performed on the file to be processed;
and sending the target scheduling command to the first server.
Optionally, the second processor 1701 is specifically configured to:
determining identification information of a mirror image file of an inference model for realizing each subtask in the subtask cluster;
carrying out container configuration on the mirror image file of the inference model of each subtask to obtain container configuration information corresponding to each subtask; the container configuration information corresponding to the subtasks is used for indicating that containers are created for the subtasks;
combining the container configuration information corresponding to each subtask to generate a target scheduling command comprising the combined configuration information; and the combination configuration information is used for indicating that a container combination is created for the container corresponding to each subtask.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 16 and 17, but this does not mean only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In another embodiment of the present invention, a computer-readable storage medium is further provided, in which instructions are stored, and when the instructions are executed on a computer, the computer is enabled to execute the task processing method in any one of the above-mentioned first server-side embodiments; or executing any of the task processing methods in the second server-side embodiment.
In another embodiment of the present invention, a computer program product containing instructions is further provided, which when run on a computer causes the computer to execute the task processing method described in any of the first server-side embodiments above.
In another embodiment of the present invention, a computer program product containing instructions is further provided, which when run on a computer causes the computer to execute the task processing method described in any of the second server-side embodiments above.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments.
The above description is merely of preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (16)

1. A task processing method, applied to a first server, characterized in that the method comprises the following steps:
if a target scheduling command sent by a second server is received, acquiring a file to be processed and a subtask cluster; wherein the subtask cluster comprises at least two associated subtasks of a to-be-processed task for the file to be processed, and the target scheduling command is used for instructing the first server to create a container combination for the subtask cluster; and
creating the container combination for the subtask cluster; wherein the container combination comprises a container corresponding to each subtask in the subtask cluster, each container contains an image file of an inference model for implementing the corresponding subtask, and running the image file of each container in the container combination performs the task processing on the file to be processed.
2. The method of claim 1, wherein the step of creating the container combination for the subtask cluster comprises:
acquiring, from a pre-built image repository, the image file of the inference model for implementing each subtask in the subtask cluster, based on the service identifier of each subtask in the subtask cluster;
instantiating each image file to obtain a container for each image file; and
creating the container combination of the subtask cluster based on the container of each image file.
3. The method of claim 2, wherein the subtasks in the subtask cluster are associated by an execution order, and the step of creating the container combination of the subtask cluster based on the container of each image file comprises:
arranging the containers of the image files based on the execution order of the subtasks in the subtask cluster to obtain the container combination corresponding to the subtask cluster.
4. The method according to claim 3, wherein before the containers of the image files are arranged based on the execution order of the subtasks in the subtask cluster to obtain the container combination corresponding to the subtask cluster, the method further comprises:
creating a target container for data format conversion in the case that the data formats of the inference models corresponding to two adjacent subtasks in the subtask cluster do not match;
wherein the target container is arranged between a first container and a second container in the container combination, and the first container and the second container are the containers of the two adjacent subtasks whose inference models have mismatched data formats.
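Claims 2 to 4 describe assembling the container combination in subtask execution order and splicing a format-conversion (target) container between any two adjacent stages whose inference models have mismatched data formats. The following is a minimal illustrative sketch of that arrangement logic, not the patent's actual implementation; all names and the format fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Container:
    name: str           # hypothetical: service identifier of the subtask
    input_format: str   # data format the inference model consumes
    output_format: str  # data format the inference model produces

def arrange_containers(subtask_containers):
    """Arrange containers in subtask execution order, inserting a
    format-conversion container wherever two adjacent inference
    models have mismatched data formats."""
    combination = []
    for i, container in enumerate(subtask_containers):
        if i > 0:
            prev = subtask_containers[i - 1]
            if prev.output_format != container.input_format:
                # the "target container" of claim 4, between first and second
                combination.append(Container(
                    name=f"convert-{prev.name}-to-{container.name}",
                    input_format=prev.output_format,
                    output_format=container.input_format))
        combination.append(container)
    return combination

pipeline = arrange_containers([
    Container("speech-to-text", "wav", "text"),
    Container("translation", "json", "json"),  # expects json, predecessor emits text
])
print([c.name for c in pipeline])
# a conversion container is inserted between the two subtask containers
```

The converter only appears where the adjacent formats actually differ, so a fully compatible pipeline keeps exactly one container per subtask.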
5. A task processing method, applied to a second server, characterized in that the method comprises the following steps:
acquiring a subtask cluster of a to-be-processed task for a file to be processed, wherein the subtask cluster comprises at least two associated subtasks of the to-be-processed task;
generating a target scheduling command based on the subtask cluster; wherein the target scheduling command is used for instructing a first server to create a container combination for the subtask cluster, the container combination comprises a container corresponding to each subtask in the subtask cluster, each container contains an image file of an inference model for implementing the corresponding subtask, and running the image file of each container in the container combination performs the task processing on the file to be processed; and
sending the target scheduling command to the first server.
6. The method of claim 5, wherein generating a target scheduling command based on the subtask cluster comprises:
determining identification information of the image file of the inference model for implementing each subtask in the subtask cluster;
performing container configuration on the image file of the inference model of each subtask to obtain container configuration information corresponding to each subtask, wherein the container configuration information corresponding to a subtask is used for indicating that a container is to be created for the subtask; and
combining the container configuration information corresponding to the subtasks to generate the target scheduling command comprising the combined configuration information, wherein the combined configuration information is used for indicating that a container combination is to be created from the containers corresponding to the subtasks.
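Claim 6 builds the target scheduling command by merging per-subtask container configuration into one combined configuration. The patent does not name a concrete orchestrator or command format; the sketch below assumes a Kubernetes-style Pod manifest as one plausible encoding of the "container combination", with a hypothetical registry URL and field names:

```python
def build_scheduling_command(subtasks, registry="registry.example.com"):
    """Combine per-subtask container configuration information into a
    single combined configuration (shaped here like a Kubernetes Pod
    spec; the claim itself is orchestrator-agnostic)."""
    containers = []
    for task in subtasks:
        containers.append({
            "name": task["service_id"],
            # identification information of the image file of the
            # subtask's inference model
            "image": f"{registry}/{task['image']}:{task.get('tag', 'latest')}",
        })
    return {
        "kind": "Pod",  # the "container combination" to be created
        "metadata": {"name": "subtask-cluster"},
        "spec": {"containers": containers},
    }

cmd = build_scheduling_command([
    {"service_id": "detect", "image": "detector-model"},
    {"service_id": "classify", "image": "classifier-model", "tag": "v2"},
])
```

Sending `cmd` to the first server then corresponds to the final step of claim 5; the first server reads the combined configuration and instantiates one container per listed image.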
7. A task processing method, applied to a first server, characterized in that the method comprises the following steps:
acquiring a container combination corresponding to a subtask cluster of a file to be processed; wherein the subtask cluster comprises at least two associated subtasks of a to-be-processed task for the file to be processed, the container combination comprises a container corresponding to each subtask in the subtask cluster, and each container contains an image file of an inference model for implementing the corresponding subtask;
determining a processing order for the image file of each container in the container combination according to the execution order of the subtasks in the subtask cluster; and
performing, based on the processing order, the task processing on the file to be processed by sequentially running the image file of each container in the container combination.
8. The method according to claim 7, wherein the container combination comprises M containers, the i-th container contains an i-th image file, and the i-th image file is the image file of an i-th inference model for implementing an i-th subtask, where M is a positive integer and i takes the values 1, 2, …, M;
the step of performing, based on the processing order, the task processing on the file to be processed by sequentially running the image file of each container in the container combination comprises:
acquiring first target data; wherein, when i is equal to 1, the first target data is the file to be processed, and when i is greater than 1, the first target data is the output data obtained by running the (i-1)-th image file on the input data of the (i-1)-th inference model; and
running the i-th image file on the first target data to obtain second target data; wherein, when i is less than M, the second target data serves as the input data of the (i+1)-th inference model, and when i is equal to M, the second target data is determined to be the task processing output result of the subtask cluster.
9. The method according to claim 8, wherein, if the data formats of the (i-1)-th inference model and the i-th inference model do not match, the container combination further comprises a target container located between the (i-1)-th container and the i-th container, and the target container is used for converting the output data format of the (i-1)-th inference model according to the input data format of the i-th inference model;
the step of running the i-th image file on the first target data to obtain the second target data comprises:
taking the first target data as the input data of the target container and running the target container to obtain target format data, wherein the format of the target format data matches the input data format of the i-th inference model; and
taking the target format data as the input data of the i-th inference model and running the i-th image file to obtain the second target data.
10. The method according to any one of claims 7 to 9, wherein, for any two adjacent containers in the container combination, the former container transfers its output data to the latter container by means of inter-process communication.
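Claims 7 to 10 describe chaining the stages: the first container consumes the file to be processed, each subsequent container consumes its predecessor's output, and the output of the M-th container is the task result. A toy sketch of that loop, with plain Python callables standing in for the containerized inference models (real containers would exchange the data via inter-process communication, as claim 10 states):

```python
def run_combination(containers, file_to_process):
    """Run the image file of each container in processing order:
    stage 1 consumes the file to be processed (first target data),
    each later stage consumes the previous stage's output, and the
    last stage's output is the task processing result."""
    data = file_to_process          # first target data for i = 1
    for run_stage in containers:    # stages are already in processing order
        data = run_stage(data)      # second target data of stage i
    return data                     # output of stage M

result = run_combination(
    [str.strip, str.upper, lambda s: s + "!"],  # stand-ins for M = 3 models
    "  detected cat  ",
)
# result == "DETECTED CAT!"
```

The format-conversion target container of claim 9 fits this loop unchanged: it is simply one more callable placed between the two mismatched stages.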
11. A task processing apparatus, applied to a first server, characterized in that the apparatus comprises:
a first acquisition module, configured to acquire a file to be processed and a subtask cluster if a target scheduling command sent by a second server is received; wherein the subtask cluster comprises at least two associated subtasks of a to-be-processed task for the file to be processed, and the target scheduling command is used for instructing the first server to create a container combination for the subtask cluster; and
a creation module, configured to create the container combination for the subtask cluster; wherein the container combination comprises a container corresponding to each subtask in the subtask cluster, each container contains an image file of an inference model for implementing the corresponding subtask, and running the image file of each container in the container combination performs the task processing on the file to be processed.
12. A task processing apparatus, applied to a second server, characterized in that the apparatus comprises:
a second acquisition module, configured to acquire a subtask cluster of a to-be-processed task for a file to be processed, wherein the subtask cluster comprises at least two associated subtasks of the to-be-processed task;
a generation module, configured to generate a target scheduling command based on the subtask cluster; wherein the target scheduling command is used for instructing a first server to create a container combination for the subtask cluster, the container combination comprises a container corresponding to each subtask in the subtask cluster, each container contains an image file of an inference model for implementing the corresponding subtask, and running the image file of each container in the container combination performs the task processing on the file to be processed; and
a sending module, configured to send the target scheduling command to the first server.
13. A task processing apparatus, applied to a first server, characterized in that the apparatus comprises:
a third acquisition module, configured to acquire a container combination corresponding to a subtask cluster of a file to be processed; wherein the subtask cluster comprises at least two associated subtasks of a to-be-processed task for the file to be processed, the container combination comprises a container corresponding to each subtask in the subtask cluster, and each container contains an image file of an inference model for implementing the corresponding subtask;
a determination module, configured to determine a processing order for the image file of each container in the container combination according to the execution order of the subtasks in the subtask cluster; and
a task processing module, configured to perform, based on the processing order, the task processing on the file to be processed by sequentially running the image file of each container in the container combination.
14. An electronic device, being a first server, characterized by comprising a first processor, a first communication interface, a first memory, and a first communication bus, wherein the first processor, the first communication interface, and the first memory communicate with one another through the first communication bus;
the first memory is configured to store a computer program; and
the first processor is configured to perform the method steps of any one of claims 1 to 4 and 7 to 10 when executing the program stored in the first memory.
15. An electronic device, being a second server, characterized by comprising a second processor, a second communication interface, a second memory, and a second communication bus, wherein the second processor, the second communication interface, and the second memory communicate with one another through the second communication bus;
the second memory is configured to store a computer program; and
the second processor is configured to perform the method steps of any one of claims 5 to 6 when executing the program stored in the second memory.
16. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a first processor, implements the method of any one of claims 1 to 4 and 7 to 10, or, when executed by a second processor, implements the method of any one of claims 5 to 6.
CN201911118848.3A 2019-11-15 2019-11-15 Task processing method and device, electronic equipment and computer readable storage medium Active CN110888722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911118848.3A CN110888722B (en) 2019-11-15 2019-11-15 Task processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911118848.3A CN110888722B (en) 2019-11-15 2019-11-15 Task processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110888722A true CN110888722A (en) 2020-03-17
CN110888722B CN110888722B (en) 2022-05-20

Family

ID=69747548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911118848.3A Active CN110888722B (en) 2019-11-15 2019-11-15 Task processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110888722B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101438286A (en) * 2006-05-05 2009-05-20 奥沐尼芬有限公司 A method of enabling digital music content to be downloaded to and used on a portable wireless computing device
CN106778959A (en) * 2016-12-05 2017-05-31 宁波亿拍客网络科技有限公司 A kind of specific markers and method system that identification is perceived based on computer vision
CN109040152A (en) * 2017-06-08 2018-12-18 阿里巴巴集团控股有限公司 A kind of service request and providing method based on service orchestration, device and electronic equipment
CN109766184A (en) * 2018-12-28 2019-05-17 北京金山云网络技术有限公司 Distributed task scheduling processing method, device, server and system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352717A (en) * 2020-03-24 2020-06-30 广西梯度科技有限公司 Method for realizing kubernets self-defined scheduler
CN111352717B (en) * 2020-03-24 2023-04-07 广西梯度科技股份有限公司 Method for realizing kubernets self-defined scheduler
CN111427665A (en) * 2020-03-27 2020-07-17 合肥本源量子计算科技有限责任公司 Quantum application cloud platform and quantum computing task processing method
CN111625374A (en) * 2020-05-15 2020-09-04 北京达佳互联信息技术有限公司 Task processing method, terminal and storage medium
CN111625374B (en) * 2020-05-15 2023-06-27 北京达佳互联信息技术有限公司 Task processing method, terminal and storage medium
CN111694640A (en) * 2020-06-10 2020-09-22 北京奇艺世纪科技有限公司 Data processing method and device, electronic equipment and storage medium
CN111694640B (en) * 2020-06-10 2023-04-21 北京奇艺世纪科技有限公司 Data processing method, device, electronic equipment and storage medium
CN111858234A (en) * 2020-06-19 2020-10-30 浪潮电子信息产业股份有限公司 Task execution method, device, equipment and medium
CN112015535A (en) * 2020-08-28 2020-12-01 苏州科达科技股份有限公司 Task processing method and device, electronic equipment and storage medium
CN112015535B (en) * 2020-08-28 2023-09-08 苏州科达科技股份有限公司 Task processing method, device, electronic equipment and storage medium
CN112114950A (en) * 2020-09-21 2020-12-22 中国建设银行股份有限公司 Task scheduling method and device and cluster management system
CN112839239A (en) * 2020-12-30 2021-05-25 广州虎牙科技有限公司 Audio and video processing method and device and server
CN112764898A (en) * 2021-01-18 2021-05-07 北京思特奇信息技术股份有限公司 Method and system for scheduling tasks among containers
CN113010280A (en) * 2021-02-19 2021-06-22 北京字节跳动网络技术有限公司 Distributed task processing method, system, device, equipment and medium
CN113806035A (en) * 2021-03-09 2021-12-17 京东科技控股股份有限公司 Distributed scheduling method and service server
CN113191502A (en) * 2021-04-21 2021-07-30 烽火通信科技股份有限公司 Artificial intelligence model on-line training method and system
CN113378030A (en) * 2021-05-18 2021-09-10 上海德衡数据科技有限公司 Search method of search engine, search engine architecture, device and storage medium
CN113608751A (en) * 2021-08-04 2021-11-05 北京百度网讯科技有限公司 Operation method, device and equipment of reasoning service platform and storage medium
CN113835828A (en) * 2021-08-23 2021-12-24 深圳致星科技有限公司 AI inference method, system, electronic device, readable storage medium and product
CN113703784A (en) * 2021-08-25 2021-11-26 上海哔哩哔哩科技有限公司 Data processing method and device based on container arrangement
WO2023124000A1 (en) * 2021-12-31 2023-07-06 中国第一汽车股份有限公司 Multi-concurrency data processing method and device

Also Published As

Publication number Publication date
CN110888722B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN110888722B (en) Task processing method and device, electronic equipment and computer readable storage medium
US20200326870A1 (en) Data pipeline architecture for analytics processing stack
US10884808B2 (en) Edge computing platform
CN108510082B (en) Method and device for processing machine learning model
Wienke et al. A middleware for collaborative research in experimental robotics
US9977701B2 (en) Remote communication and remote programming by application programming interface
CN113867600A (en) Development method and device for processing streaming data and computer equipment
KR20110065448A (en) Composing message processing pipelines
US9996344B2 (en) Customized runtime environment
CN113946389A (en) Federal learning process execution optimization method, device, storage medium, and program product
CN110941463B (en) Remote sensing satellite data preprocessing multistage product self-driven system
US6973659B2 (en) Mapping between remote procedure call system primitives and event driven execution environment system primitives
Franco da Silva et al. Customization and provisioning of complex event processing using TOSCA
US9537931B2 (en) Dynamic object oriented remote instantiation
CN116668520A (en) Gateway-based service arrangement method, system, equipment and storage medium
CN107103058B (en) Big data service combination method and composite service combination method based on Artifact
Pisarić et al. Towards a plug-and-play architecture in Industry 4.0
Krishnamurthy et al. Programming frameworks for Internet of Things
US11228502B2 (en) Aggregation platform, requirement owner, and methods thereof
Hassan et al. Toward the generation of deployable distributed IoT system on the cloud
CN113986255A (en) Method and device for deploying service and computer-readable storage medium
Bakulev et al. Moving Enterprise Integration Middleware toward the Distributed Stream Processing Architecture
US20110213844A1 (en) Udp multicast mass message transport
EP3751361A1 (en) System for action indication determination
Lytra et al. A pattern language for service-based platform integration and adaptation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant