WO2022247110A1 - Task processing method and apparatus, electronic device, and storage medium - Google Patents

Task processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022247110A1
WO2022247110A1 · PCT/CN2021/124779 · CN2021124779W
Authority
WO
WIPO (PCT)
Prior art keywords
panorama
task
training
unit
operation unit
Prior art date
Application number
PCT/CN2021/124779
Other languages
English (en)
French (fr)
Inventor
杨凯
李韡
徐子豪
吴立威
高原
崔磊
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2022247110A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing

Definitions

  • Embodiments of the present disclosure relate to the technical field of image processing, and relate to, but are not limited to, a task processing method and device, electronic equipment, and a storage medium.
  • the embodiment of the present disclosure provides a task processing technical solution.
  • An embodiment of the present disclosure provides a task processing method, the method including: acquiring a task to be processed;
  • determining an operation unit and a resource unit for realizing the task to be processed, wherein the operation unit at least performs a processing operation on the task to be processed, and the resource unit includes data input and/or output by the operation unit during execution of the processing operation;
  • constructing, based on the operation unit and the resource unit, a panorama including the processing flow of the task to be processed; and
  • processing the task to be processed based on the panorama to obtain a processing result.
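The four steps above can be sketched in Python. This is an illustrative assumption, not the patent's implementation; the names `OperationUnit`, `ResourceUnit`, and `process_task` are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ResourceUnit:
    """Holds data consumed or produced by an operation unit."""
    name: str
    data: Any = None

@dataclass
class OperationUnit:
    """Wraps one processing operation on the task to be processed."""
    name: str
    run: Callable[[Any], Any]
    inputs: list = field(default_factory=list)   # first resource units
    outputs: list = field(default_factory=list)  # second resource units

def process_task(task, units):
    """Run the operation units in order, passing data between them."""
    data = task
    for unit in units:
        data = unit.run(data)
    return data

# Usage: two toy operation units connected in series
detect = OperationUnit("detect", lambda img: {"boxes": [img]})
classify = OperationUnit("classify", lambda det: ["defect" for _ in det["boxes"]])
result = process_task("part.png", [detect, classify])
```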
  • there are at least two operation units and at least two resource units
  • the panorama includes a training panorama
  • constructing the panorama including the processing flow of the task to be processed, based on the operation units and resource units of the task to be processed, includes: among the at least two resource units, determining a first resource unit serving as the input of each operation unit and a second resource unit serving as the output of each operation unit; and connecting each operation unit with its corresponding first resource unit and second resource unit to obtain the training panorama.
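The wiring step can be sketched as an edge-list builder, assuming the panorama is represented as a directed graph; `build_training_panorama` and `io_map` are hypothetical names, not from the patent:

```python
def build_training_panorama(ops, io_map):
    """Connect each operation unit to its input (first) and output (second)
    resource units, yielding the edge list of the training panorama.
    io_map maps op name -> (input resource names, output resource names)."""
    edges = []
    for op in ops:
        first, second = io_map[op]
        for r in first:
            edges.append((r, op))   # resource feeds the operation
        for r in second:
            edges.append((op, r))   # operation produces the resource
    return edges

# Usage: detection feeds classification through a shared "boxes" resource
panorama = build_training_panorama(
    ["detect", "classify"],
    {"detect": (["dataset"], ["boxes"]), "classify": (["boxes"], ["labels"])},
)
```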
  • the panorama includes a first reasoning panorama
  • constructing the panorama including the processing flow of the task to be processed, based on the operation units and resource units of the task to be processed, includes: determining, in a front-end panorama file, a target operation unit and a target resource unit that match the processing flow of the task to be processed, wherein the front-end panorama file includes at least two operation units and at least two resource units; and constructing, based on the target operation unit and the target resource unit, the first inference panorama, which does not include workflow data.
  • the panorama includes a first reasoning panorama
  • constructing the panorama including the processing flow of the task to be processed, based on the operation units and resource units of the task to be processed, includes: determining, in the training panorama of the panorama, a target operation unit and a target resource unit that match the processing flow of the task to be processed; and constructing, based on the target operation unit and the target resource unit, the first inference panorama, which does not include workflow data. In this way, by selecting the target operation unit and the target resource unit in the training panorama, the first inference panorama can be built more quickly and conveniently.
  • after each operation unit is connected to its corresponding first resource unit and second resource unit to obtain the training panorama, the method further includes: training the operation units of the training panorama that include a model to be trained; and determining, based on the operation units including the trained models in the training panorama and on the first inference panorama, a second inference panorama for inferring the task to be processed, wherein each trained model is obtained by training the corresponding model to be trained. In this way, once training is complete and the different module models are obtained, they can be imported directly into the inference graph for use, so that tasks to be processed in complex scenarios can be handled quickly.
  • training the operation units including the model to be trained in the training panorama includes: converting the training panorama of the front end into a training intermediate result graph of the back end; constructing a first running graph with a starting point based on the preset graph template corresponding to each operation unit in the training intermediate result graph, wherein the preset graph template corresponding to each operation unit is set based on the task at the front end, and the starting point of the first running graph is any operation unit in the training intermediate result graph; converting the first running graph into a training workflow capable of training the functions of the first running graph; and training the operation units of the model to be trained based on the training workflow.
  • training the operation unit of the model to be trained based on the training workflow includes: determining, based on the training workflow, the logical relationships between different operation units in the first running graph; and training the operation unit of the model to be trained according to those logical relationships. In this way, by analyzing the logical relationships among multiple operation units, the model to be trained can be trained more accurately and reasonably.
  • determining the second inference panorama for inferring the task to be processed, based on the operation units including the trained models in the training panorama and on the first inference panorama, includes: determining, in the first inference panorama, the target operation units that match the operation units in the training panorama that include the trained models; and importing the trained models into the matched target operation units to obtain the second inference panorama. In this way, once training is complete and the models are obtained, they can be imported directly into the inference graph for use, which speeds up building the entire processing flow.
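The import step can be sketched as matching units by name and attaching the trained model; `import_trained_models` and the unit-config dicts are illustrative assumptions, not the patent's data model:

```python
def import_trained_models(inference_units, trained_models):
    """Copy each trained model into the inference operation unit whose name
    matches the training-panorama unit it came from, yielding the second
    inference panorama (a sketch: units are plain config dicts)."""
    second = dict(inference_units)  # shallow copy: name -> config dict
    for name, model in trained_models.items():
        if name in second:
            # Attach the model without mutating the original unit config.
            second[name] = {**second[name], "model": model}
    return second

units = {"detect": {"kind": "detector"}, "classify": {"kind": "classifier"}}
second = import_trained_models(units, {"detect": "det_v1", "classify": "cls_v1"})
```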
  • processing the task to be processed based on the panorama to obtain the processing result includes: inputting the task to be processed into the second inference panorama of the panorama; and processing the task to be processed with the second inference panorama to obtain the processing result.
  • processing the task to be processed based on the second inference panorama to obtain the processing result includes: converting the second inference panorama of the front end into an inference intermediate result graph of the back end; constructing a second running graph with a starting point based on the preset graph template corresponding to each operation unit in the inference intermediate result graph, wherein the starting point of the second running graph is any operation unit in the inference intermediate result graph; converting the second running graph into an inference workflow; and processing the task to be processed with the inference workflow to obtain the processing result.
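A minimal sketch of this translation path for a simple chain-shaped graph; `build_running_graph` and `run_workflow` are hypothetical names and the depth-first walk stands in for the real graph-to-workflow translation:

```python
def build_running_graph(graph, start):
    """Walk the inference intermediate result graph from `start`, collecting
    operation units in execution order (valid for chain-shaped graphs)."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        order.append(node)
        for nxt in graph.get(node, []):
            visit(nxt)
    visit(start)
    return order

def run_workflow(order, ops, task):
    """Convert the running graph into a workflow and execute it on the task."""
    data = task
    for name in order:
        data = ops[name](data)
    return data

graph = {"detect": ["classify"], "classify": []}
ops = {"detect": lambda x: x + ["detected"], "classify": lambda x: x + ["classified"]}
result = run_workflow(build_running_graph(graph, "detect"), ops, [])
```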
  • by converting the front-end inference graph into an inference workflow and seamlessly connecting nodes with different functions, the inference function of the entire processing flow can be completed.
  • the operation units at least include: a detection dataset labeling unit, a matting unit, a detection unit, a classification dataset labeling unit, and a classification unit;
  • the resource units at least include: the data input and/or output by the detection dataset labeling unit during the labeling operation, the data input and/or output by the matting unit during the matting operation, the data input and/or output by the detection unit during the detection operation, the data input and/or output by the classification dataset labeling unit during the labeling operation, and the data input and/or output by the classification unit during the classification operation.
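As a hedged illustration of how these five units might pass resource data along a chain, the stand-in functions below (the names and the shared state dict are hypothetical, not the patent's interfaces) each record their output and hand it onward:

```python
# Each stand-in unit adds its output to a shared state dict and passes it on.
def det_labeling(d):  return {**d, "det_labels": True}
def matting(d):       return {**d, "cutouts": True}
def detection(d):     return {**d, "boxes": [(0, 0, 8, 8)]}
def cls_labeling(d):  return {**d, "cls_labels": True}
def classify(d):      return {**d, "classes": ["scratch"]}

pipeline = [det_labeling, matting, detection, cls_labeling, classify]
state = {"image": "part.png"}
for unit in pipeline:
    state = unit(state)
```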
  • An embodiment of the present disclosure provides a task processing device, and the device includes:
  • the first obtaining module is configured to obtain tasks to be processed
  • the first determination module is configured to determine an operation unit and a resource unit that realize the task to be processed, wherein the operation unit at least performs a processing operation on the task to be processed, and the resource unit includes data input and/or output by the operation unit during execution of the processing operation;
  • the first building module is configured to build a panorama including the processing flow of the task to be processed based on the operation unit of the task to be processed and the resource unit;
  • the first processing module is configured to process the task to be processed based on the panorama to obtain a processing result.
  • An embodiment of the present disclosure provides a computer storage medium, on which computer executable instructions are stored. After the computer executable instructions are executed, the above task processing method can be implemented.
  • An embodiment of the present disclosure provides a computer device. The computer device includes a memory and a processor; computer-executable instructions are stored in the memory, and when the processor runs the computer-executable instructions in the memory, the above-mentioned task processing method is implemented.
  • An embodiment of the present disclosure provides a computer program product, where the computer program product includes computer-executable instructions. After the computer-executable instructions are executed, the task processing method described in any one of the foregoing can be implemented.
  • Embodiments of the present disclosure provide a task processing method and device, electronic equipment, and a storage medium.
  • For an acquired task to be processed, first the operation units and resource units that realize the task are determined; then the execution sequence between the operation units and the input/output relationships between the operation units and the resource units are analyzed; on this basis, the operation units of the task are connected with the resource units, and a panorama including the processing flow of the task is constructed; finally, based on the panorama, the task to be processed can be processed quickly.
  • different operation units can be quickly connected in series, achieving the effect of building the entire processing flow as a whole, so that tasks to be processed in complex scenes can be solved effectively.
  • FIG. 1A is a schematic diagram of a system architecture to which a task processing method according to an embodiment of the present disclosure can be applied;
  • FIG. 1B is a schematic diagram of the implementation flow of the task processing method provided by the embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of another implementation of the task processing method provided by the embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of the composition and structure of a panorama translator provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an implementation flow of a panorama training map provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of the structure and composition of a task processing device according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of the composition and structure of a computer device according to an embodiment of the present disclosure.
  • "First/second/third" is only used to distinguish similar objects and does not denote a specific ordering of objects. Understandably, where permitted, the specific order or sequence of "first/second/third" may be interchanged, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein.
  • Concatenation, in the embodiments of the present disclosure, refers to connecting the processing flows of different tasks together.
  • An application container engine is an open-source engine that allows developers to package their applications and dependencies into a portable image, publish it to any machine running a popular operating system, and thereby realize virtualization. Containers use a sandbox mechanism completely and have no interfaces to one another.
  • An exemplary application of the device provided by the embodiments of the present disclosure is described below. The device may be implemented as various types of user terminals with data processing functions, such as a notebook computer, a tablet computer, a desktop computer, a mobile device (for example, a personal digital assistant or a dedicated messaging device), or a portable game device, and may also be implemented as a server. An exemplary application in which the device is implemented as a terminal or a server will be described below.
  • An embodiment of the present disclosure provides a task processing method, which can be applied to a computer device; the functions implemented by the method can be realized by a processor in the computer device calling program code, and the program code can of course be stored in a computer storage medium.
  • the computer device includes at least a processor and a storage medium.
  • Figure 1A is a schematic diagram of a system architecture to which the task processing method of the embodiments of the present disclosure can be applied; as shown in Figure 1A, the system architecture includes: a task acquisition terminal 11, a network 12, and a task processing terminal 13.
  • the task acquisition terminal 11 and the task processing terminal 13 can establish a communication connection through the network 12 , and the task acquisition terminal 11 reports the obtained tasks to be processed to the task processing terminal 13 through the network 12 .
  • the task processing terminal 13 acquires tasks to be processed autonomously.
  • After the task processing terminal 13 receives the task to be processed, it first determines the operation units and resource units that realize the task; then it builds a panorama on this basis; finally, the task processing terminal 13 uses the panorama to process the task and sends the processing result to the task acquisition terminal 11 through the network 12. In this way, tasks in complex scenes can be solved quickly.
  • the task acquisition terminal 11 may include an image acquisition device, and the task processing terminal 13 may include a processing device with information processing capabilities or a remote server.
  • the network 12 may adopt a wired connection or a wireless connection.
  • the task acquisition terminal 11 can communicate with the processing device through a wired connection, for example via a bus; when the task processing terminal 13 is a remote server, the task acquisition terminal 11 can exchange data with the remote server via a wireless network.
  • the task processing method in the embodiments of the present disclosure may be executed by the task processing terminal 13 alone, in which case the above system architecture need not include the network 12 or the task acquisition terminal 11.
  • An embodiment of the present disclosure provides a task processing method, which is executed by an electronic device; as shown in Figure 1B, the method will be described in conjunction with the following steps:
  • Step S101 acquiring tasks to be processed.
  • the task to be processed may be a data processing task of any complex scene, which needs to be realized by combining multiple different algorithm modules.
  • the task to be processed can be an image recognition task for images of complex scenes; for example, in an industrial production scene, the task to be processed is identifying defects in certain parts, and the input may be a picture of the part whose background is very complicated.
  • the task to be processed may be the classification and identification of ships at sea.
  • the task to be processed can be actively acquired by the electronic device.
  • the task to be processed is to identify the defect of parts in the image in the industrial production scene.
  • the task to be processed includes an image collected by an image collector, and may also be an image sent by another device.
  • Step S102 determining an operation unit and a resource unit for realizing the task to be processed.
  • the operation unit at least performs a processing operation on the task to be processed, and the resource unit includes data input and/or output by the operation unit during execution of the processing operation.
  • The algorithm modules and data processing modules required to realize the task to be processed are obtained through analysis. Each operation unit is a virtualization node encapsulating an algorithm module; each resource unit is a virtualization node encapsulating a data processing module, where the data processing module provides input data for one algorithm module or processes the output data of another algorithm module.
  • a resource unit is an input of an operation unit; in some implementations, a resource unit may be an input or an output of an operation unit, or both: the output of the previous operation unit and the input of the next operation unit.
  • the algorithm modules required to realize the task to be processed, that is, the operation units, include an image detection operation unit and a classification operation unit; the corresponding resource units are the specific data involved in the detection and classification process.
  • The order, during processing of the task, of the detection operation unit and its associated data and of the classification operation unit and its associated data is taken as the sequential association relationship between the operation units and the resource units; according to this association relationship, multiple operation units and multiple resource units are connected together to form a panorama for realizing the part-defect identification task.
  • the task to be processed may be set by the user, or obtained from the background, and the functional module may be an operation unit and resource unit.
  • Step S103 based on the operation unit of the task to be processed and the resource unit, construct a panorama including the processing flow of the task to be processed.
  • the panorama includes a training panorama and/or an inference panorama
  • the panorama may be formed by dragging on the front-end canvas.
  • From the operation units and resource units included in the front-end panorama file, the operation units and resource units that realize the task to be processed are determined, and the connections between them are determined according to their execution order.
  • Based on the connection relationships on the front-end canvas, multiple operation units and resource units are connected by drag-and-drop operations to form the panorama.
  • A panorama is a complete solution for producing an artificial-intelligence model, built by users on the canvas, and includes functions such as model training, evaluation, and the serial connection of reasoning logic.
  • The canvas is the area of the artificial-intelligence training platform where users drag and drop different components to build the whole model-production process.
  • Step S104 based on the panorama, process the task to be processed to obtain a processing result.
  • The trained model can be imported into the inference stage of the panorama, so that the workflow of the inference stage is used to process the task to be processed and obtain the processing result.
  • the task to be processed is a defect recognition task for parts in the image.
  • the panorama includes a training phase and an inference phase.
  • The detection model and the classification model in the panorama are trained, and the trained detection model and the trained classification model are applied to the inference stage of the panorama. In the inference stage, the trained detection model is used to detect the image and, based on the detection result, the trained classification model is used for identification, thereby completing defect identification of the parts in the image.
  • For the acquired task to be processed, first the operation units and resource units that realize the task are analyzed, along with the execution sequence between operation units and the input/output relationships between operation units and resource units; on this basis, the operation units are connected with the resource units and a panorama including the processing flow of the task is constructed; finally, based on the panorama, rapid processing of the task can be realized.
  • different operation units can be quickly connected in series, achieving the effect of building the entire processing flow as a whole, so that tasks to be processed in complex scenes can be solved effectively.
  • the panorama includes a training panorama
  • On the front-end canvas, according to the execution order of the operation units, the different operation units are connected with the resource units to form the training panorama; that is, the above step S103 can be realized by the following steps shown in Figure 2:
  • Step S201 among at least two resource units, determine a first resource unit as an input of each operation unit and a second resource unit as an output of each operation unit.
  • The operation units may be the relatively important modules in the process of realizing the task to be processed, or all of the algorithm modules. For example, when the functions of multiple operation units overlap, only one operation unit implementing the function may be kept, reducing the number of modules and improving the efficiency of creating the panorama.
  • Each operation unit is connected to at least one resource unit. If an operation unit has both input and output, its input and output are both resource units; if it has only input, its input is a resource unit; if it has only output, its output is a resource unit. For each operation unit, the data input to it and the data output by it are analyzed, giving the first resource unit and the second resource unit respectively.
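This input/output analysis can be sketched over an edge list; `classify_resources` and the edge representation are illustrative assumptions, not the patent's data structures:

```python
def classify_resources(edges, ops):
    """For every operation unit, split its attached resource units into the
    first (input) and second (output) resource units."""
    first = {op: [] for op in ops}
    second = {op: [] for op in ops}
    for src, dst in edges:
        if dst in first:          # resource -> operation: an input
            first[dst].append(src)
        elif src in second:       # operation -> resource: an output
            second[src].append(dst)
    return first, second

# Usage: "boxes" is both detect's output and classify's input
edges = [("dataset", "detect"), ("detect", "boxes"), ("boxes", "classify")]
first, second = classify_resources(edges, ["detect", "classify"])
```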
  • Step S202 connecting each operation unit with the corresponding first resource unit and second resource unit to obtain the training panorama.
  • The contextual relationships in the execution order of the multiple operation units are analyzed; through these relationships, the execution order among the operation units can be determined, and thus the positions in the training panorama of the first resource unit and the second resource unit serving as each operation unit's input and output.
  • These multiple operation units and the corresponding resource units are assembled to form a training panorama of the entire process of executing the pending task.
  • the front-end graph may be constructed by the user by dragging and dropping multiple functional modules on the front-end interface, or may be formed automatically based on the connection relationships.
  • When the panorama only includes the training panorama, after performing the above step S202, step S104a is entered, and the task to be processed is processed based on the training panorama to obtain the processing result.
  • The connection order between the multiple operation units and the corresponding resource units can be accurately determined, and according to the execution order of the different operation units, the operation units and resource units can be connected in series; in this way, multiple operation units and resource units can be connected quickly and conveniently, building an inference panorama that includes a full-chain algorithm solution.
  • the first inference panorama may be a panorama built at the front end without workflow data; that is, the above step S103 can also be realized through the following two steps:
  • Step S131 determining a target operation unit and a target resource unit that match the processing flow of the task to be processed in the front-end panorama file.
  • the front-end panorama file includes at least two operation units and at least two resource units.
  • the front-end panorama file also includes the connection relationship between the operation unit and the resource unit, such as a connection line (link).
  • the operation unit may include operation units such as training, reasoning, and evaluation of the corresponding algorithm module, and may also include the name of the resource unit connected to the operation unit.
  • the resource unit may include data entities in the process of model training or reasoning, and may also include data set interface functions, formats of input and output data, image sizes, and so on.
  • the task to be processed is a defect identification task
  • The operation units included are: an operation unit realizing the data input function, an operation unit realizing the detection dataset labeling function, an operation unit realizing the target matting function, an operation unit realizing the detection model training function, an operation unit realizing the result-to-data conversion function, an operation unit realizing the classification model training function, and so on. Docker virtual service technology is used to divide the algorithm into single-point operation units, and each one is encapsulated into a Docker virtual image as a virtualization node in the panorama container; that is, these operation units are encapsulated into Docker virtualization nodes, yielding dataset nodes, detection labeling nodes, detection model training nodes, result-to-data conversion nodes, classification model training nodes, and so on. Since the detection operation unit and the classification operation unit share a duplicate dataset node, only one dataset node may be kept among the resulting virtual nodes.
  • Step S132 based on the target operation unit and the target resource unit, construct the first reasoning panorama that does not include workflow data.
  • The connection relationships between the sub-units of the different functional operation units can be determined, and thus the connection relationships between the virtualization nodes corresponding to each sub-function module; on this basis, multiple virtualization nodes are connected in series to form an inference panorama that can completely realize the processing of the task to be processed.
  • The at least two operation units include a detection operation unit and a classification operation unit. In the process of defect recognition, the execution sequence is to detect the image first and then classify based on the detection results; that is, the detection operation unit comes first and the classification operation unit follows. On this basis, the detection operation unit, the classification operation unit, and the corresponding resource units are connected in series to obtain the inference panorama.
  • In this way, the product-level task implementation process can be connected in series, and the first inference panorama including the entire processing flow can be constructed more efficiently.
  • Step S133 in the training panorama of the panorama, determine a target operation unit and a target resource unit that match the processing flow of the task to be processed.
  • the panorama includes a training panorama built at the front end, and the training panorama includes a model to be trained and sample data for implementing tasks to be processed.
  • The operation units and resource units applicable to the inference stage can be selected from the training panorama. For example, taking defect recognition of a device as the task to be processed, the training panorama includes the training sample set, the detection model to be trained, the classification model to be trained, and so on; in the training panorama, the detection model to be trained and the classification model to be trained are selected for application in the inference phase.
  • Step S134 based on the target operation unit and the target resource unit, construct the first inference panorama that does not include workflow data.
  • According to the target operation units and target resource units selected for the inference panorama and their connection relationships in the training panorama, the target operation units and target resource units are connected in series on the front-end canvas to obtain the first inference panorama, which does not contain workflow data.
  • the first inference panorama can be built more quickly and conveniently.
  • a trained model that can be applied to the inference phase is obtained; that is, after step S202, the following steps are also included:
  • the first step is to train the operation units including the model to be trained in the training panorama.
  • The front-end training panorama is translated into the back-end training workflow; the training of the operation units including the model to be trained can be realized through the following process:
  • the training panorama at the front end is converted into a training intermediate result image at the back end.
  • the training intermediate result graph is stored in the form of an intermediate file.
  • each operation unit has an input relationship and/or an output relationship with its resource units.
  • One possible implementation is: for all the operation units in the training panorama file, merge the input resource unit or output resource unit of each operation unit into the corresponding operation unit; based on the connection relationships, determine the connection relationship between any two operation units that have input and output relationships with the same resource unit; and save all operation units and the connection relationship between every two operation units into the training panorama file to obtain the converted intermediate file.
  • In this way, the converted intermediate file can conveniently store the content of the training panorama and provide support for subsequent conversion into other functional graphs.
  • In another possible implementation, for all operation units in the training panorama, the input resource unit or output resource unit of each operation unit is merged into the corresponding operation unit; at the same time, the connection relationship between two operation units that have an input and output relationship with the same resource unit is determined, that connection relationship is also incorporated into the attributes of the corresponding operation unit, and the attributes of all operation units are directly stored to obtain the converted intermediate file.
  • In this way, the converted intermediate file can conveniently store the content of the training panorama, can connect to the training panorama while meeting the needs of conversion into other graphs, and solves the problem that the training panorama is difficult to convert and translate into a workflow graph that can run at the back end.
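The merging step described above can be sketched as follows. This is a minimal illustration, not the patent's actual file format; the triple layout and field names are assumptions made for the sketch:

```python
def to_intermediate_file(links):
    """Merge each operation unit's input/output resource units into the unit
    itself, then derive op-to-op connections through shared resource units.

    links: iterable of (resource, op, direction) triples, where direction is
    "in" (the resource feeds the op) or "out" (the op produces the resource).
    """
    merged, producers, consumers = {}, {}, {}
    for res, op, direction in links:
        unit = merged.setdefault(op, {"inputs": [], "outputs": []})
        if direction == "in":
            unit["inputs"].append(res)
            consumers.setdefault(res, []).append(op)
        else:
            unit["outputs"].append(res)
            producers.setdefault(res, []).append(op)
    # Two operation units that write to and read from the same resource
    # unit are connected in the intermediate file.
    edges = [(p, c)
             for res, prods in producers.items()
             for p in prods
             for c in consumers.get(res, [])]
    return {"ops": merged, "edges": edges}
```

For instance, a labeling unit that outputs data set D2 and a training unit that reads D2 end up connected by an edge, which is exactly the relationship the intermediate file must preserve.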
  • Next, based on the preset graph template corresponding to each operation unit in the training intermediate result graph, a first running graph with a starting point is constructed.
  • The preset graph template corresponding to each operation unit is set based on the task at the front end; the starting point of the first running graph is any operation unit in the training intermediate result graph.
  • The intermediate result graph is a directed acyclic graph (Directed Acyclic Graph, DAG), indicating that each operation unit in the intermediate result graph completes a part of the entire task and satisfies the constraints of a specific execution order: some operation units can start only after certain other operation units have finished executing. In this way, it can be confirmed that the task composed of all the operation units can be completed within an effective time.
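The execution-order constraint of a DAG can be checked with a standard topological sort. This is a generic sketch rather than the patent's implementation, and the unit names in the example are illustrative:

```python
from collections import deque

def execution_order(ops, edges):
    """Return one valid execution order of the operation units, raising an
    error if the graph has a cycle (i.e. it is not a DAG)."""
    indegree = {op: 0 for op in ops}
    successors = {op: [] for op in ops}
    for before, after in edges:
        successors[before].append(after)
        indegree[after] += 1
    # Units with no unfinished predecessors are ready to start.
    ready = deque(op for op in ops if indegree[op] == 0)
    order = []
    while ready:
        op = ready.popleft()
        order.append(op)
        for nxt in successors[op]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(ops):
        raise ValueError("cycle detected: the task cannot finish")
    return order
```

If every unit appears in the returned order, the task composed of all operation units can indeed be carried out to completion.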
  • The starting point of the first running graph may be set according to the required training task; for example, if the required training task is to train the detection model, the starting point is the input node of the sample data set.
  • Then, the first running graph is converted into a training workflow capable of realizing the training functions of the first running graph.
  • For example, the training panorama includes a detection model to be trained and a classification model to be trained, and the data for training the classification model depends on the inference result of the detection model. Therefore, when the task to be processed is a defect recognition task, the front end of the model training platform reserves a detection training workflow template and a detection evaluation workflow template related to the object detection model, as well as a classification training workflow template and a classification evaluation workflow template related to the image classification model.
  • Finally, based on the training workflow, the operation unit including the model to be trained is trained.
  • For example, the operation units in the first running graph include a detection data set labeling unit and a detection model training unit. According to the sequential relationship between the detection data set labeling unit and the detection model training unit in the training process, the detection data set labeling unit first labels the sample data set; then, the labeled sample data set is used to train the detection model to be trained in the detection model training unit, thereby realizing the training of the operation unit including the model to be trained.
  • In this way, the training of the models to be trained in the operation units can be realized more accurately and reasonably.
  • In the second step, based on the operation units including the trained model in the training panorama and the first inference panorama, a second inference panorama for performing inference on the task to be processed is determined.
  • Here, the trained model is obtained by training the model to be trained. After the training of the model to be trained in the training panorama is completed, the trained model can be directly applied to the first inference panorama, so that the first inference panorama includes the workflow data needed for inference, thereby realizing inference on the task to be processed. In this way, after training is completed and the corresponding models of different modules are obtained, they can be directly imported into the inference graph for use, which can quickly realize the processing of tasks to be processed in complex scenarios.
  • In some possible implementations, the operation units and resource units that can process the task to be processed are selected in the trained training panorama, and a second inference panorama is formed based on these operation units and resource units.
  • In some possible implementations, the training of the model to be trained in the training panorama is realized by translating the panorama into a training workflow; the trained model is imported into the first inference panorama that has been built at the front end but contains no data, so that the second inference panorama capable of performing inference on the task to be processed can be obtained. This can be achieved through the following steps:
  • First, in the first inference panorama, a target operation unit that matches the operation unit including the trained model in the training panorama is determined.
  • That is, the operation unit corresponding to the trained model is determined first; then, from the first inference panorama, the matching operation unit that does not include workflow data is determined. For example, in the first inference panorama, a target operation unit corresponding to the detection model to be trained is determined.
  • the trained model is imported into the matched target operation unit to obtain the second inference panorama.
  • the workflow data of the trained model is imported into the first inference panorama to obtain a second inference panorama capable of processing tasks to be processed.
  • For example, the detection model trained in the training panorama is imported into the corresponding operation unit of the first inference panorama for inference use. In this way, after training is completed and the corresponding models are obtained, they can be directly imported into the inference graph for inference use, which improves the speed of building the entire processing flow.
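The import step can be pictured as filling each matching, still-empty target unit of the first inference panorama with its trained model. This is an illustrative sketch; the `name` and `model` fields are hypothetical, not the patent's data layout:

```python
def import_trained_models(inference_units, trained_models):
    """Copy each trained model into the target operation unit of the first
    inference panorama whose name matches, producing the second panorama."""
    second_panorama = []
    for unit in inference_units:
        unit = dict(unit)  # keep the first inference panorama untouched
        if unit.get("model") is None and unit["name"] in trained_models:
            unit["model"] = trained_models[unit["name"]]
        second_panorama.append(unit)
    return second_panorama
```

After the import, every unit of the second panorama that previously lacked workflow data carries a trained model and can be translated into an inference workflow.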
  • In some embodiments, the task to be processed is processed using the second inference panorama; that is, the above step S104 can be implemented through the following steps S141 and S142 (not shown in the figure):
  • Step S141: input the task to be processed into the second inference panorama of the panorama.
  • Here, the trained model is included in the second inference panorama; the task to be processed is input into the second inference panorama whose operation units include the trained model, so as to realize the processing of the task to be processed by converting the second inference panorama into workflow data.
  • Step S142: process the task to be processed based on the second inference panorama to obtain the processing result.
  • After the second inference panorama is converted into an inference workflow, this inference workflow can be used to implement the processing of the task to be processed.
  • this inference workflow can be directly called by the back-end task scheduling tool.
  • processing of tasks to be processed may be implemented through the following steps:
  • In the first step, the second inference panorama at the front end is converted into an inference intermediate result graph at the back end.
  • Here, the process of converting the front-end second inference panorama into the back-end inference intermediate result graph is the same as the process of converting the front-end training panorama into the back-end training intermediate result graph: a panorama translator is used to convert the second inference panorama into the back-end inference intermediate result graph.
  • the second step is to construct a second operation graph with a starting point based on the preset graph template corresponding to each operation unit in the inference intermediate result graph.
  • the starting point of the second operation graph is any operation unit in the inference intermediate result graph.
  • the selection of the starting point of the second running graph depends on the task to be processed; for example, if the task to be processed is a detection task, then the starting point of the second running graph is the detection model.
  • In this way, a detection workflow template related to the detection model can be obtained, and according to the detection workflow template, a second running graph capable of running at the back end is constructed.
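Constructing a running graph from a chosen starting point amounts to collecting every operation unit reachable from it and attaching each unit's preset template. This is a generic sketch under that reading; the template values and unit names are illustrative, and the patent does not specify this level of detail:

```python
def build_running_graph(start, edges, templates):
    """Gather all operation units reachable from `start` and pair each
    unit with its preset graph template, yielding a runnable sub-graph."""
    successors = {}
    for before, after in edges:
        successors.setdefault(before, []).append(after)
    ordered, stack = [], [start]
    while stack:  # depth-first traversal from the starting point
        op = stack.pop()
        if op not in ordered:
            ordered.append(op)
            stack.extend(successors.get(op, []))
    return [(op, templates[op]) for op in ordered]
```

Choosing a different starting point (for example, the classification unit instead of the detection unit) simply yields a smaller reachable sub-graph, which matches the idea that the running graph depends on the task at hand.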
  • the second operation graph is converted into an inference workflow.
  • the inference converter is used to convert the second running graph into an inference workflow, so as to obtain workflow data for processing tasks to be processed.
  • the fourth step is to use the reasoning workflow to process the task to be processed to obtain the processing result.
  • the converted inference workflow is used to process the tasks to be processed and obtain the processing results.
  • In some possible implementations, the back-end scheduling tool realizes the processing of pending tasks by invoking the converted inference workflow. In this way, by translating the front-end inference graph into an inference workflow and seamlessly connecting nodes with different functions, the inference function of the entire processing flow can be completed.
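Once converted, the inference workflow behaves as an ordered chain of steps that the scheduler invokes one after another, each step consuming the previous step's output. The sketch below illustrates only that chaining idea, not the actual k8s-based scheduler; the step functions are stand-ins:

```python
def run_workflow(steps, task_input):
    """Invoke each workflow step in order, feeding each step's output into
    the next one, and return the final processing result."""
    data = task_input
    for name, step in steps:
        data = step(data)  # each node's output is the next node's input
    return data
```

Usage with two stand-in steps shows how the detection output flows into classification:

```python
steps = [("detect", lambda imgs: [i + "-box" for i in imgs]),
         ("classify", lambda imgs: [i + "-label" for i in imgs])]
run_workflow(steps, ["img1"])
```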
  • In some embodiments, when the task to be processed is a classification recognition task, the operation units include: a detection data set labeling unit, a matting unit, a detection unit, a classification data set labeling unit, and a classification unit. The resource units include: the data input and/or output by the detection data set labeling unit during the labeling operation, the data input and/or output by the matting unit during the matting operation, the data input and/or output by the detection unit during the detection operation, the data input and/or output by the classification data set labeling unit during the labeling operation, and the data input and/or output by the classification unit during the classification operation. Based on this, when the task to be processed is a classification recognition task, the process of processing the classification recognition task is as follows:
  • the first step is to build a training panorama and a first inference panorama at the front end based on the correspondence between operation units and resource units.
  • Here, Docker virtualization technology is used to encapsulate these operation units into virtualized nodes, including: a detection data set labeling node, a matting node, a detection node, a classification data set labeling node, and a classification node.
  • the operation unit and resource unit are packaged as virtualized nodes, so as to facilitate the construction of a panorama of the entire solution process.
  • the second step is to train the model to be trained in the training panorama.
  • Here, from the detection node set and the classification node set obtained by the encapsulation in the first step, the nodes used to train the detection model and the classification model are selected; the data set node, the detection labeling node, and the detection model training node are used to train the detection model to be trained, and the matting node, the classification labeling node, and the classification model training node are used to train the classification model to be trained.
  • the training of the detection model to be trained and the classification model to be trained can be realized, thereby completing the training nodes in the panorama.
  • In the third step, the trained models matching the target operation units in the training panorama are imported into the first inference panorama to obtain a second inference panorama.
  • First, the connection relationships between the training nodes are determined, that is, the connection relationships between the data set node, the detection labeling node, the detection model training node, the matting node, the classification labeling node, and the classification model training node.
  • Here, from top to bottom, the connection order of these nodes is: data set node, detection labeling node, detection model training node, matting node, classification labeling node, and classification model training node.
  • Based on this connection relationship, the data set node, the detection labeling node, the detection model training node, the matting node, the classification labeling node, and the classification model training node are sequentially connected in series, as shown in FIG. 4, from the D1 data set to the classification model training node 407.
  • In this way, the training of the models to be trained is realized, so that a trained training graph is obtained; the trained training graph is then applied to the inference graph, realizing the construction of the entire processing flow.
  • the fourth step is to use the second reasoning panorama to process the classification recognition task.
  • Here, the inference panorama including the trained detection model and the trained classification model is translated into an inference workflow, and the classification recognition task is processed; in this way, while planning the entire processing flow, the processing of the task can also be realized efficiently.
  • Deep learning algorithms have made great progress in various fields, and have also been put into practice in many industrial fields.
  • In complex scenarios, the algorithm solution usually requires the series connection and fusion of multiple different algorithm modules.
  • Taking face recognition as an example, the solution usually needs to include a face detection module, a face key point module, a face quality module, a liveness module, and a face feature module.
  • Algorithmic solutions in other fields also require the combined use of multiple algorithm modules.
  • In the embodiments of the present disclosure, this method is called a panorama.
  • different algorithm modules are called algorithm nodes in the panorama, and different data processing modules are packaged as virtualized nodes.
  • The connections between modules with different functions are called edges in the panorama.
  • In this way, different algorithm modules can be quickly connected in series to achieve the effect of building the entire algorithm solution as a whole, and a dedicated panorama task processing flow can be built for different complex scenarios.
  • FIG. 3 is a schematic diagram of the composition and structure of the panorama translator provided by the embodiment of the present disclosure. The following description is made in conjunction with FIG. 3:
  • the front-end 300 includes: a front-end panorama 301 , which represents a diagram of the entire task processing flow constructed by dragging and dropping at the front-end.
  • The back end 302 includes: an intermediate result converter 321, a graph template 322, a module 323 for constructing a running graph according to the starting point, a training converter 324, an inference converter 325, a training workflow 326, and an inference workflow 327; wherein:
  • The intermediate result converter 321 produces an intermediate result graph (inter graph), which is the storage form of the graph displayed at the user front end, including nodes, operation units (ops), and connecting lines (links); the intermediate result graph mainly contains operation units and includes multiple modules for processing tasks.
  • Translating the front-end panorama into an intermediate result graph means, for example, translating the front-end panorama into a structure of connected data that describes the function of each module.
  • The graph template 322 indicates that each functional module corresponds to a different configuration: the parameters required by each functional module are configured in the intermediate result graph, so that each functional module corresponds to a different configuration and the graph template 322 is obtained; that is, different functional modules correspond to different graph templates.
  • The training converter 324 is configured to, based on the constructed running graph, train the templates for completing the task and generate the training workflow 326.
  • the inference converter 325 is configured to operate the task based on the trained model and form an inference workflow 327 .
  • FIG. 4 is a schematic diagram of the implementation process of the panorama training image provided by the embodiment of the present disclosure. The following description is made in conjunction with FIG. 4:
  • the panorama training graph 401 includes data set nodes, data set label nodes, image matting nodes, model training nodes, etc., wherein:
  • The input of the detection data set labeling node 402 is a data set (for example, data set D1); the node labels the data set for the detection task and the classification task, and the output is the data set and the corresponding label file. The output data set D2 is the D1 data set with the annotation information of the detection task added, including the detection frame (bounding box, bbox) and label on each image.
  • The input of the matting node 403 is a data set with an annotation file (for example, data set D2); the node crops the images according to the labeled bboxes, and the output is a new data set D3. The data set D3, in which the specific parts have been matted out, is used to perform the downstream classification task.
  • The input of the classification data set labeling node 404 is the classification data set D3; the node performs the classification task labeling function, and the output is the data set D4. The data set D4 is the D3 data set with classification labeling information added, and is used for training the classification model.
  • The input of the detection model training node 405 is the detection data set D2 and the labels for training the corresponding detection model, and the output is the M1 detection model 406.
  • The input of the classification model training node 407 is the classification data set D4 and the labels used to train the corresponding classification model, and the output is the M2 classification model 408.
  • The panorama inference graph 411 includes: data set nodes, inference nodes, a result transfer data set node, a matting node, etc., wherein:
  • The detection inference node 412 indicates the input of data; the description of the data set and the corresponding storage path are filled into this node, where the data set D5 is the data set to be tested.
  • The input is the data set D5 and the M1 detection model 406; the M1 detection model performs the inference function, and the output is the detection result of the input data set, that is, the detection result of the data set D5.
  • The input of the result transfer data set node 413 is the inference result; the node converts the inference result to meet the data input requirements of the downstream task, and the output is the data set D6 conforming to the format of the downstream task.
  • The data set D6 is the data set satisfying the downstream classification task.
  • The input of the matting node 414 is the result data set obtained by detection inference; the node crops the pictures according to the predicted bboxes, and the output is the data set D7 to be classified after matting.
  • The input of the classification inference node 415 is the data set D7 to be classified and the M2 classification model 408; the node performs the classification inference function, and the output is the final classification result.
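The inference chain of FIG. 4 (nodes 412 to 415) can be sketched end to end as follows. The `crop` helper and the two model callables are placeholders standing in for the real M1 detection and M2 classification models; the data shapes are assumptions made for the sketch:

```python
def crop(image, bbox):
    # hypothetical matting step: keep only the region inside the bbox
    start, end = bbox
    return image[start:end]

def inference_chain(d5, m1_detect, m2_classify):
    """Detection inference (412) -> result transfer (413) ->
    matting (414) -> classification inference (415)."""
    detections = [m1_detect(img) for img in d5]              # node 412
    d6 = [{"image": img, "bbox": box}                        # node 413
          for img, box in zip(d5, detections)]
    d7 = [crop(item["image"], item["bbox"]) for item in d6]  # node 414
    return [m2_classify(patch) for patch in d7]              # node 415
```

The point of the sketch is the data hand-off: D5 flows through detection, the result transfer step reshapes it into D6, matting yields D7, and classification produces the final result.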
  • In this way, the user can build a complete panorama, including a component detection algorithm module, a picture matting processing module, different component classification algorithm modules, and an inference module, together with the corresponding training and inference workflows.
  • In the embodiments of the present disclosure, first, the panorama is based on the concept of a graph and can quickly and conveniently build a complete task processing flow for complex industrial scenarios. Then, the different algorithm modules and data processing modules in the entire algorithm chain are encapsulated into Docker images as nodes in the graph. Finally, based on the translator, the front-end panorama is translated into the corresponding training workflow and inference workflow, and k8s is used in the workflow to schedule the different images to complete the entire flow. In this way, the seamless connection of nodes with different functions is realized, and the training and inference functions of the whole solution are completed.
  • FIG. 5 is a schematic diagram of the structure and composition of the task processing device in an embodiment of the present disclosure.
  • the task processing device 500 includes:
  • the first obtaining module 501 is configured to obtain tasks to be processed
  • the first determination module 502 is configured to determine an operation unit and a resource unit that realize the task to be processed; wherein the operation unit at least includes performing a processing operation on the task to be processed, and the resource unit includes the operation unit in the data input and/or output during the execution of said processing operations;
  • the first construction module 503 is configured to construct a panorama including the processing flow of the task to be processed based on the operation unit of the task to be processed and the resource unit;
  • the first processing module 504 is configured to process the task to be processed based on the panorama to obtain a processing result.
  • the panorama includes a training panorama
  • the first construction module 503 includes:
  • the first determining submodule is configured to, among at least two resource units, determine a first resource unit as an input of each operation unit and a second resource unit as an output of each operation unit;
  • the first connection submodule is configured to connect each operation unit with the corresponding first resource unit and the second resource unit to obtain the training panorama.
  • the panorama includes a first reasoning panorama
  • the first construction module 503 includes:
  • the second determining submodule is configured to determine in the front-end panorama file a target operation unit and a target resource unit that match the processing flow of the task to be processed; wherein, the front-end panorama file includes at least two operation units and at least two resource units;
  • the first construction submodule is configured to construct the first reasoning panorama that does not include workflow data based on the target operation unit and the target resource unit.
  • the panorama includes a first reasoning panorama
  • the first construction module 503 includes:
  • the third determining submodule is configured to determine, in the training panorama of the panorama, a target operation unit and a target resource unit that match the processing flow of the task to be processed;
  • the second construction submodule is configured to construct the first reasoning panorama that does not include workflow data based on the target operation unit and the target resource unit.
  • the device also includes:
  • the first training module is configured to train the operation units including the model to be trained in the training panorama;
  • the second determination module is configured to determine a second inference panorama configured to perform inference on the task to be processed based on the operating unit including the trained model in the training panorama and the first inference panorama; wherein, The trained model is obtained by training the model to be trained.
  • the first training module includes:
  • the first conversion sub-module is configured to convert the training panorama at the front end into a training intermediate result image at the back end;
  • the third construction sub-module is configured to construct a first running graph with a starting point based on the preset graph template corresponding to each operation unit in the training intermediate result graph; wherein the preset graph template corresponding to each operation unit is set based on the task at the front end, and the starting point of the first running graph is any operation unit in the training intermediate result graph;
  • the second conversion sub-module is configured to convert the first running graph into a training workflow capable of realizing the training functions of the first running graph;
  • the first training submodule is configured to train the operation unit of the model to be trained based on the training workflow.
  • the first training submodule includes:
  • a first determination unit configured to determine a logical relationship between different operation units in the first operation diagram based on the training workflow
  • the first training unit is configured to train the operation unit of the model to be trained according to the logical relationship.
  • the second determination module includes:
  • the fourth determining submodule is configured to determine, in the first inference panorama, a target operating unit that matches an operating unit in the training panorama that includes a trained model;
  • the first import submodule is configured to import the trained model into the matched target operation unit to obtain the second inference panorama.
  • the first processing module 504 includes:
  • a first input submodule configured to input the task to be processed into a second reasoning panorama in the panorama
  • the first processing submodule is configured to process the task to be processed based on the second inference panorama to obtain the processing result.
  • the first processing submodule includes:
  • the first conversion unit is configured to convert the second inference panorama at the front end into an intermediate inference result graph at the back end;
  • the first construction unit is configured to construct a second running graph with a starting point based on a preset graph template corresponding to each operation unit in the inference intermediate result graph; wherein, the starting point of the second running graph is the Any operation unit in the inference intermediate result graph;
  • a second conversion unit configured to convert the second running graph into an inference workflow
  • the first processing unit is configured to use the reasoning workflow to process the task to be processed to obtain the processing result.
  • the operation unit at least includes: a detection data set labeling unit, a matting unit, a detection unit, a classification data set labeling unit, and a classification unit;
  • the resource unit at least includes: the data input and/or output by the detection data set labeling unit during the labeling operation, the data input and/or output by the matting unit during the matting operation, the The data input and/or output by the detection unit during the detection operation, the data input and/or output by the classification dataset labeling unit during the labeling operation, and the classification unit input and/or output during the classification operation or output data.
  • When the above task processing method is realized in the form of software function modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage media include various media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • embodiments of the present disclosure are not limited to any specific combination of hardware and software.
  • the embodiments of the present disclosure further provide a computer program product, where the computer program product includes computer-executable instructions. After the computer-executable instructions are executed, the steps in the task processing method provided by the embodiments of the present disclosure can be implemented.
  • the embodiments of the present disclosure further provide a computer storage medium, where computer executable instructions are stored on the computer storage medium, and when the computer executable instructions are executed by a processor, the steps of the task processing method provided in the foregoing embodiments are implemented.
  • FIG. 6 is a schematic diagram of the composition and structure of a computer device in an embodiment of the present disclosure.
  • The computer device 600 includes: a processor 601, at least one communication bus, a communication interface 602, at least one external communication interface, and a memory 603.
  • the communication interface 602 is configured to realize connection and communication between these components.
  • the communication interface 602 may include a display screen, and the external communication interface may include a standard wired interface and a wireless interface.
  • the processor 601 is configured to execute the image processing program in the memory, so as to realize the steps of the task processing method provided in the above embodiments.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may serve as a single unit, or two or more units may be integrated into one unit; the above integrated unit can be realized in the form of hardware or in the form of hardware plus a software functional unit.
  • When the above integrated units of the present disclosure are realized in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium includes various media capable of storing program codes such as removable storage devices, ROMs, magnetic disks or optical disks.
  • The present disclosure provides a task processing method and device, an electronic device, and a storage medium. The method includes: acquiring a task to be processed; determining an operation unit and a resource unit for realizing the task to be processed, wherein the operation unit at least includes a processing operation performed on the task to be processed, and the resource unit includes the data input and/or output by the operation unit during the execution of the processing operation; constructing, based on the operation unit of the task to be processed and the resource unit, a panorama including the processing flow of the task to be processed; and processing the task to be processed based on the panorama to obtain a processing result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

一种任务处理方法及装置、电子设备和存储介质,该方法包括:获取待处理任务(S101);确定实现所述待处理任务的操作单元和资源单元(S102);其中,所述操作单元至少包括对所述待处理任务进行处理操作,所述资源单元包括所述操作单元在执行所述处理操作过程中输入和/或输出的数据;基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图(S103);基于所述全景图,对所述待处理任务进行处理,得到处理结果(S104)。通过该方法,能够快速解决复杂场景中的任务。

Description

任务处理方法及装置、电子设备和存储介质
相关申请的交叉引用
本专利申请要求2021年05月25日提交的中国专利申请号为202110570235.4、申请人为上海商汤智能科技有限公司,申请名称“任务处理方法及装置、电子设备和存储介质”的优先权,该申请的全文以引用的方式并入本公开中。
技术领域
本公开实施例涉及图像处理技术领域,涉及但不限于一种任务处理方法及装置、电子设备和存储介质。
背景技术
在计算机视觉领域中,相关技术采用单个算法模块的技术解决现实场景中的问题。但是在复杂场景下,由于其整个流程(pipeline)的复杂性、多样化、多模块、多模态等问题,所以场景中的问题并不能转化为单一的基本任务,所以采用相关技术中的单模块的算法无法有效解决复杂场景中的问题。
发明内容
本公开实施例提供一种任务处理技术方案。
本公开实施例的技术方案是这样实现的:
本公开实施例提供一种任务处理方法,所述方法包括:
获取待处理任务;
确定实现所述待处理任务的操作单元和资源单元;其中,所述操作单元至少包括对所述待处理任务进行处理操作,所述资源单元包括所述操作单元在执行所述处理操作过程中输入和/或输出的数据;
基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图;
基于所述全景图,对所述待处理任务进行处理,得到处理结果。
在一些实施例中,所述操作单元和所述资源单元均为至少两个,所述全景图包括训练全景图,所述基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图,包括:在至少两个资源单元中,确定作为每一操作单元的输入的第一资源单元和作为所述每一操作单元的输出的第二资源单元;将所述每一操作单元和对应的第一资源单元和第二资源单元进行连接,得到所述训练全景图。如此,能够快速、便利地连接多个操作单元和资源单元,构建包括全链条算法解决方案的推理全景图。
在一些实施例中,所述全景图包括第一推理全景图,所述基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图,包括:在前端全景图文件中确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;其中,所述前端全景图文件中包括至少两个操作单元和至少两个资源单元;基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。如此,将多个虚拟化节点进行串接,从而形成能够快速完成实现对待处理任务进行处理的推理全景图。
在一些实施例中,所述全景图包括第一推理全景图,所述基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图,包括:在所述全景图的训练全景图中,确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。如此,通过在训练全景图中选择目标操作单元和目标资源单元,能够更加快速且便捷地搭建第一推理全景图。
在一些实施例中,将所述每一操作单元和对应的第一资源单元和第二资源单元进行连接,得到所述训练全景图之后,所述方法还包括:对所述训练全景图中的包括待训练模型的操作单元进行训练;基于所述训练全景图中包括已训练模型的操作单元和所述第一推理全景图,确定用于对所述待处理任务进行推理的第二推理全景图;其中,所述已训练模型为对所述待训练模型进行训练得到的。如此,在训练完成得到对应的不同模块模型后,直接导入到推理图中进行推理使用,能够快速实现对复杂场景中待处理任务的处理。
在一些实施例中,所述对所述训练全景图中的包括待训练模型的操作单元进行训练,包括:将前端的所述训练全景图转换为后端的训练中间结果图;基于所述训练中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第一运行图;其中,每一所述操作单元对应的预设图模板为前端基于任务设定的;所述第一运行图的起始点为所述训练中间结果图中的任一操作单元;将所述第一运行图,转换为能够训练所述第一运行图的功能的训练工作流;基于所述训练工作流,对所述待训练模型的操作单元进行训练。如此,基于有向无环图中的操作单元,结合各个操作单元对应的预设图模板,生成最终可被后端运行的工作流,可以实现复杂场景下多个模型的有序训练。
在一些实施例中,所述基于所述训练工作流,对所述待训练模型的操作单元进行训练,包括:基于所述训练工作流,确定所述第一运行图中的不同操作单元之间的逻辑关系;按照所述逻辑关系,对所述待训练模型的操作单元进行训练。如此,通过分析多个操作单元之间的逻辑关系,能够更加准确且合理的实现对操作单元中待训练模型的训练。
在一些实施例中,所述基于所述训练全景图中包括已训练模型的操作单元和所述第一推理全景图,确定用于对所述待处理任务进行推理的第二推理全景图,包括:在所述第一推理全景图中,确定与所述训练全景图中包括已训练模型的操作单元相匹配的目标操作单元;将所述已训练模型导入所述相匹配的目标操作单元,得到所述第二推理全景图。如此,训练完成得到不同的模型后,直接导入到推理图中进行推理使用,提高了搭建整个处理流程的速度。
在一些实施例中,所述基于所述全景图,对所述待处理任务进行处理,得到处理结果,包括:将所述待处理任务输入所述全景图中的第二推理全景图;基于所述第二推理全景图,对所述待处理任务进行处理,得到所述处理结果。如此,通过将已训练的操作单元输出的已训练模型,组装成能够实现整个处理流程的第二推理全景图,能够便于直接被后端的任务调度工具调用。
在一些实施例中,所述基于所述第二推理全景图,对所述待处理任务进行处理,得到所述处理结果,包括:将前端的所述第二推理全景图转换为后端的推理中间结果图;基于所述推理中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第二运行图;其中,所述第二运行图的起始点为所述推理中间结果图中的任一操作单元;将所述第二运行图转换为推理工作流;采用所述推理工作流,对所述待处理任务进行处理,得到所述处理结果。如此,通过将前端的推理图翻译成推理工作流,无缝连接各个不同功能的节点,能够完成整个处理过程的推理功能。
在一些实施例中,在所述待处理任务为分类识别任务的情况下,所述操作单元至少包括:检测数据集标注单元、抠图单元、检测单元、分类数据集标注单元和分类单元;所述资源单元至少包括:所述检测数据集标注单元在执行标注操作过程中输入和/或输出的数据,所述抠图单元在执行抠图操作过程中输入和/或输出的数据,所述检测单元在执行检测操作过程中输入和/或输出的数据,分类数据集标注单元在执行标注操作过程中输入和/或输出的数据,以及,所述分类单元在执行分类操作过程中输入和/或输出的数据。如此,对分类识别任务进行处理,在快速且便捷地搭建整个方案处理流程的同时,还能够高效地实现对任务的处理。
本公开实施例提供一种任务处理装置,所述装置包括:
第一获取模块,配置为获取待处理任务;
第一确定模块,配置为确定实现所述待处理任务的操作单元和资源单元;其中,所述操作单元至少包括对所述待处理任务进行处理操作,所述资源单元包括所述操作单元在执行所述处理操作过程中输入和/或输出的数据;
第一构建模块,配置为基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图;
第一处理模块,配置为基于所述全景图,对所述待处理任务进行处理,得到处理结果。
本公开实施例提供一种计算机存储介质,所述计算机存储介质上存储有计算机可执行指令,该计算机可执行指令被执行后,能够实现上述的任务处理方法。
本公开实施例提供一种计算机设备,所述计算机设备包括存储器和处理器,所述存储器上存储有计算机可执行指令,所述处理器运行所述存储器上的计算机可执行指令时能够实现上述的任务处理方法。
本公开实施例提供一种计算机程序产品,所述计算机程序产品包括计算机可执行指令,该计算机可执行指令被执行后,能够实现上述任意一项所述的任务处理方法。
本公开实施例提供一种任务处理方法及装置、电子设备和存储介质,对于获取的待处理任务,通过首先,分析实现待处理任务的操作单元和资源单元;然后,分析操作单元之间的执行顺序,以及操作单元和资源单元之间的输入/输出关系;基于此,将待处理任务的操作单元和所述资源单元进行连接,构建包括待处理任务的处理流程的全景图;然后,基于该全景图,即可实现对待处理任务进行的快速处理。如此,基于全景图的概念,能够快速串接不同的操作单元,从而达到整体构建整个处理流程的效果,以有效解决复杂场景中的待处理任务。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本公开。根据下面参考附图对示例性实施例的详细说明,本公开的其它特征及方面将变得清楚。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,此处的附图被并入说明书中并构成本说明书中的一部分,这些附图示出了符合本公开实施例的实施例,并与说明书一起用于说明本公开实施例的技术方案。应当理解,以下附图仅示出了本公开实施例的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1A为可以应用本公开实施例的任务处理方法的一种系统架构示意图;
图1B为本公开实施例提供的任务处理方法的实现流程示意图;
图2为本公开实施例提供的任务处理方法的另一实现流程示意图;
图3为本公开实施例提供的全景图翻译器的组成结构示意图;
图4为本公开实施例提供的全景图训练图的实现流程示意图;
图5为本公开实施例任务处理装置的结构组成示意图;
图6为本公开实施例计算机设备的组成结构示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例中的附图,对本公开实施例的具体技术方案做进一步详细描述。以下实施例用于说明本公开,但不用来限制本公开的范围。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
在以下的描述中,所涉及的术语“第一\第二\第三”仅仅是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本公开实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本公开的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本公开实施例的目的,不是旨在限制本公开。
对本公开实施例进行进一步详细说明之前,先对本公开实施例中涉及的名词和术语进行说明,本公开实施例中涉及的名词和术语适用于如下的解释。
1)串接,在本公开实施例中是指将不同任务的处理流程连接在一起。
2)应用容器引擎(Docker),是一个开源的应用容器引擎,让开发者可以打包其应用以及依赖包到一个可移植的镜像中,然后发布到任何流行的操作系统机器上,也可以实现虚拟化。容器是完全使用沙箱机制,相互之间不会有任何接口。
下面说明本公开实施例提供的设备的示例性应用,本公开实施例提供的设备可以实施为具有数据处理功能的笔记本电脑,平板电脑,台式计算机,移动设备(例如,个人数字助理,专用消息设备,便携式游戏设备)等各种类型的用户终端,也可以实施为服务器。下面,将说明设备实施为终端或服务器时的示例性应用。
本公开实施例提供一种任务处理方法,该方法可以应用于计算机设备,该方法所实现的功能可以通过计算机设备中的处理器调用程序代码来实现,当然程序代码可以保存在计算机存储介质中,可见,该计算机设备至少包括处理器和存储介质。
如图1A所示,图1A为可以应用本公开实施例的任务处理方法的一种系统架构示意图;如图1A所示,该系统架构中包括:任务获取终端11、网络12和任务处理终端13。为实现支撑一个示例性应用,任务获取终端11和任务处理终端13可以通过网络12建立通信连接,任务获取终端11通过网络12向任务处理终端13上报获取的待处理任务,或者任务处理终端13自主获取待处理任务。任务处理终端13接收到待处理任务之后,首先确定实现待处理任务的操作单元和资源单元;然后,基于此构建全景图;最后,任务处理终端13采用该全景图对待处理任务进行处理,并将处理结果通过网络12发送给任务获取终端11。如此,能够快速解决复杂场景中的任务。
作为示例,任务获取终端11可以包括图像采集设备,任务处理终端13可以包括具有信息处理能力的处理设备或远程服务器。网络12可以采用有线连接或无线连接方式。其中,当任务处理终端为处理设备时,任务获取终端11可以通过有线连接的方式与处理设备通信连接,例如通过总线进行数据通信;当任务处理终端13为远程服务器时,任务获取终端11可以通过无线网络与远程服务器进行数据交互。
本公开实施例的任务处理方法可以由任务处理终端13执行,上述系统架构可以不包含网络和任务获取终端11。
下面,将结合本公开实施例提供的电子设备的示例性应用和实施,说明本公开实施例提供的任务处理方法。
本公开实施例提供一种任务处理方法,该方法由电子设备执行,如图1B所示,结合 如图1B所示步骤进行说明:
步骤S101,获取待处理任务。
在一些实施例中,待处理任务可以是任意复杂场景的数据处理任务,需要多个不同的算法模块相结合来实现。比如,工业生产、航空航海、农产品包装等领域相关的场景。待处理任务可以是对复杂场景中的图像进行图像识别的任务;比如,在工业生产场景下,待处理任务为对某些零部件缺陷的识别,该待处理任务可以是该零部件的图片,该图片具有十分复杂的背景。或者,在航空航海场景下,待处理任务可以是对海上船只的分类和识别等。待处理任务可以是电子设备主动获取的,比如,待处理任务为对工业生产场景中的图像进行零部件缺陷的识别,该待处理任务包括采用图像采集器进行采集获取的图像,还可以是其他设备发送的图像。
步骤S102,确定实现所述待处理任务的操作单元和资源单元。
在一些实施例中,所述操作单元至少包括对所述待处理任务进行处理操作,资源单元包括操作单元在执行处理操作的过程中输入和/或输出的数据。通过分析实现待处理任务所需的算法模块和数据处理模块;每一操作单元为对一个算法模块进行封装后的虚拟化节点;每一资源单元为对一个数据处理模块进行封装后的虚拟化节点,且数据处理模块为某一算法模块提供输入数据,或者对另一算法模块的输出数据进行处理。在一些可能的实现方式中,资源单元为一操作单元的输入;在一些实施方式中,一资源单元可以是一操作单元的输入或输出,或者,一资源单元同时为上一操作单元的输出和下一操作单元的输入。
比如,待处理任务为零部件缺陷识别任务,那么实现待处理任务所需的算法模块,即操作单元包括对图像进行检测操作单元和分类操作单元;对应的资源单元为检测和分类过程中涉及到的具体数据。将检测操作单元以及检测操作单元涉及到的数据,和,分类操作单元以及分类操作单元中涉及到的数据,在对待处理任务进行处理的过程中的先后顺序,作为操作单元和资源单元之间的关联关系;按照该关联关系,将多个操作单元和多个资源单元连接在一起,形成实现零部件缺陷识别任务的全景图。
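上述操作单元与资源单元的连接方式,可以用如下简化示例勾勒(其中 OperationUnit、ResourceUnit、connect 等名称均为说明所设的示意性假设,并非本公开的实际实现):

```python
from dataclasses import dataclass, field

@dataclass
class ResourceUnit:
    # 资源单元:操作单元在执行处理操作过程中输入和/或输出的数据
    name: str

@dataclass
class OperationUnit:
    # 操作单元:对待处理任务执行一种处理操作(如检测、分类)
    name: str
    inputs: list = field(default_factory=list)   # 第一资源单元(输入)
    outputs: list = field(default_factory=list)  # 第二资源单元(输出)

def connect(op, first, second):
    """将操作单元与其输入(第一)资源单元和输出(第二)资源单元连接。"""
    op.inputs.append(first)
    op.outputs.append(second)

# 以零部件缺陷识别任务为例:检测在前、分类在后
raw = ResourceUnit("原始图像数据集")
boxes = ResourceUnit("检测结果")
labels = ResourceUnit("分类结果")
detect = OperationUnit("检测操作单元")
classify = OperationUnit("分类操作单元")
connect(detect, raw, boxes)       # 检测:输入原始图像,输出检测结果
connect(classify, boxes, labels)  # 分类:输入检测结果,输出类别
panorama = [detect, classify]     # 按执行先后顺序连接形成的全景图
```

可以看到,检测操作单元的输出资源单元同时是分类操作单元的输入资源单元,两者据此串接在一起。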
在一些可能的实现方式中,该待处理任务可以是用户设定的,还可以是从后台获取的,功能模块可以是用户基于该待处理任务,在前端界面通过拖拽操作选择的操作单元和资源单元。
步骤S103,基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图。
在一些实施例中,该全景图包括训练全景图和/或推理全景图,该全景图可以是在前端的画布上通过拖拽形成的。在前端的全景图文件包括的操作单元和资源单元中,确定实现待处理任务的操作单元和资源单元;并按照操作单元和资源单元之间的执行顺序,确定这多个操作单元和资源单元之间的连接关系。按照该连接关系,在前端的画布上,通过拖拽操作,将多个操作单元和资源单元进行连接,形成全景图。
全景图是用户在画布上构建的人工智能模型生成的完整解决方案,包括模型训练、评测、推理逻辑串联等功能。其中,画布是人工智能训练平台上用户拖拽不同组件,以构建模型生产全流程的版块。
步骤S104,基于所述全景图,对所述待处理任务进行处理,得到处理结果。
在一些实施例中,通过将前端的全景图同时翻译为训练和推理两个阶段的后端的工作流,基于训练阶段的工作流对任务处理网络模块进行训练完成后,可将训练完成得到的模型导入全景图的推理阶段,从而采用推理阶段的工作流对待处理任务进行处理,得到该处理结果。比如,待处理任务为对图像中的零部件的缺陷识别任务,该全景图包括训练阶段和推理阶段,在训练阶段中对全景图中的检测模型和分类模型进行训练;将已训练的检测模型和已训练的分类模型,应用于全景图的推理阶段;在推理阶段,采用已训练的检测模型对图像进行检测,并基于检测结果,采用已训练的分类模型进行识别,从而完成对图像中的零部件的缺陷识别。
在本公开实施例中,对于获取的待处理任务,首先,分析实现待处理任务的操作单元和资源单元;然后,分析操作单元之间的执行顺序,以及操作单元和资源单元之间的输入/输出关系;基于此,将待处理任务的操作单元和所述资源单元进行连接,构建包括待处理任务的处理流程的全景图;最后,基于该全景图,即可实现对待处理任务进行的快速处理。如此,基于全景图的概念,能够快速串接不同的操作单元,从而达到整体构建整个处理流程的效果,以有效解决复杂场景中的待处理任务。
在一些实施例中,所述操作单元和所述资源单元均为至少两个,在该全景图包括训练全景图的情况下,在前端的画布上,按照操作单元的执行顺序,对不同的操作单元和所述资源单元进行连接,形成训练全景图,即上述步骤S103可以通过以下图2所示的步骤实现:
步骤S201,在至少两个资源单元中,确定作为每一操作单元的输入的第一资源单元和作为所述每一操作单元的输出的第二资源单元。
这多个操作单元可以是实现该待处理任务的过程中较为重要的模块,还可以是全部的算法模块。比如,由于多个操作单元之间的功能有重叠部分,那么可以仅保留一个实现该功能的操作单元,从而可以减少模块的数量,提高创建全景图的效率。在构建训练全景图的过程中,每一个操作单元均与至少一个资源单元连接,如果一个操作单元既有输入也有输出,那么其输入输出均为资源单元;如果一个操作单元仅有输入,那么其输入为资源单元;如果一个操作单元仅有输出,那么其输出为资源单元。对于每一个操作单元,分析出输入该操作单元的数据以及该操作单元输出的数据,即第一资源单元和第二资源单元。
步骤S202,将所述每一操作单元和对应的第一资源单元和第二资源单元进行连接,得到所述训练全景图。
在一些实施例中,在对待处理任务进行处理的过程中,分析多个操作单元的执行顺序的前后关系;通过该执行顺序的前后关系,可确定多个操作单元之间的执行顺序,从而可确定出作为该操作单元的输入和输出的第一资源单元和第二资源单元在训练全景图中所处的位置。通过分析不同操作单元之间的连接关系,将这多个操作单元和对应的资源单元搭建形成执行处理待处理任务整个流程的训练全景图。该全景图可以是用户在前端界面通过拖拽多个功能模块搭建形成的,还可以是基于该连接关系自动搭建形成的。在待处理任务为一个模型训练的任务的情况下,全景图可以是仅包括训练全景图,在执行上述步骤S202之后,进入步骤S104a,基于所述训练全景图,对所述待处理任务进行处理,得到所述处理结果。如此,按照多个操作单元的执行顺序,能够准确地确定其中的多个操作单元和对应的资源单元之间的连接顺序,将不同的操作单元和资源单元进行串接,能够快速、便利地连接多个操作单元和资源单元,构建包括全链条算法解决方案的全景图。
在一些实施例中,在全景图包括第一推理全景图的情况下,该第一推理全景图可以是在前端搭建好的未包括工作流数据的全景图,即上述步骤S103还可以通过以下两种方式实现:
方式一:通过步骤S131和S132(图示未示出)实现:
步骤S131,在前端全景图文件中确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元。
在一些实施例中,前端全景图文件中包括至少两个操作单元和至少两个资源单元。前端全景图文件还包括操作单元和资源单元之间的连接关系,如连接线(link)。所述操作单元可以包括对应算法模块的训练、推理、评测等操作单元,也可以包括与该操作单元连接的资源单元的名称。所述资源单元可以包括模型训练或推理过程中的数据实体,也可以包括数据集接口函数、输入输出数据的格式、图片尺寸等。首先,对于每一个待处理任务,用户在前端分析实现该待处理任务需要的目标操作单元和目标资源单元;然后,在画布上从前端全景图文件中拖拽出该目标操作单元和目标资源单元。
比如,待处理任务为缺陷识别任务,包括的操作单元为:实现数据输入功能的操作单元、实现检测数据集的标注功能的操作单元、实现目标抠图功能的操作单元、实现检测模型训练功能的操作单元、实现结果转换数据功能的操作单元、实现分类模型训练功能的操作单元等,采用Docker虚拟服务技术将算法模块划分为各个单点的操作单元,并且将每一个封装成Docker虚拟镜像,作为全景图中的虚拟化节点;即将这些操作单元封装为Docker虚拟化节点,从而得到数据集节点、检测标注节点、检测模型训练节点、结果转换数据节点、分类模型训练节点等。由于检测操作单元和分类操作单元之间有重复的数据集节点,所以在最终得到的多个虚拟化节点中仅保留一个数据集节点即可。
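将操作单元登记为 Docker 虚拟化节点并对重复节点去重的思路,可以用如下示意性代码表示(镜像命名规则与字段名均为说明所设的假设,并非实际实现):

```python
def build_nodes(unit_names):
    """将各操作单元登记为 Docker 镜像节点(镜像名仅为示意),
    并对重复的节点(如检测与分类共用的数据集节点)去重,只保留一份。"""
    nodes = {}
    for name in unit_names:
        if name not in nodes:  # 重复出现的节点仅保留一个
            nodes[name] = {"image": f"panorama/{name}:latest"}
    return nodes

# 检测流程与分类流程各自需要数据集节点,登记时自动去重
units = ["数据集", "检测标注", "检测模型训练", "数据集", "抠图", "分类标注", "分类模型训练"]
nodes = build_nodes(units)
```

去重后,"数据集"节点在节点表中只出现一次,供检测与分类两条链路共用。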
步骤S132,基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
在一些实施例中,按照不同操作单元之间的连接关系,可确定不同操作单元内的子单元之间的连接关系,从而可确定各个子功能模块对应的虚拟化节点之间的连接关系;基于此,将多个虚拟化节点进行串接,从而形成能够完成对待处理任务进行处理的推理全景图。
在一个具体例子中,以待处理任务为器件的缺陷识别为例,至少两个操作单元包括:检测操作单元和分类操作单元,由于在进行缺陷识别的过程中,检测操作单元和分类操作单元的执行顺序是先对图像进行检测,然后,基于检测结果进行分类,即检测操作单元在前,分类操作单元在后;基于此,将检测操作单元、分类操作单元以及对应的资源单元进行串接,得到该推理全景图。
在上述步骤S131和步骤S132中,通过从全景图文件中,选择目标操作单元和目标资源单元,从而能够串接产品级的任务实现流程,进而更加高效的构建包括整个处理流程的第一推理全景图。
方式二:通过步骤S133和S134(图示未示出)实现:
步骤S133,在所述全景图的训练全景图中,确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元。
该全景图包括在前端搭建的训练全景图,该训练全景图中包括实现待处理任务的待训练模型和样本数据。为得到能够对待处理任务进行处理的推理全景图,可以在训练全景图中选择可以应用于推理阶段的操作单元和资源单元;比如,以待处理任务为器件的缺陷识别为例,训练全景图中包括:训练样本集、待训练的检测模型和待训练的分类模型等;在该训练全景图中,选择即将应用于推理阶段的待训练的检测模型和待训练的分类模型。
步骤S134,基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
按照在推理全景图中选择的目标操作单元和目标资源单元,以及目标操作单元和目标资源单元在训练全景图中的连接关系,在前端画布上串接目标操作单元和目标资源单元,得到未包含工作流数据的第一推理全景图。这样,通过在训练全景图中选择目标操作单元和目标资源单元,能够更加快速且便捷地搭建第一推理全景图。
在一些实施例中,在前端搭建完整训练全景图之后,通过对训练全景图中的待训练模型进行训练,得到可以应用于推理阶段的已训练模型,即步骤S202之后,还包括以下步骤:
第一步,对所述训练全景图中的包括待训练模型的操作单元进行训练。
在一些可能的实现方式中,通过采用全景图翻译器,将前端的训练全景图翻译为后端的训练工作流,实现对包括待训练模型的操作单元的训练过程,可以通过以下过程实现:
首先,将前端的所述训练全景图转换为后端的训练中间结果图。
训练中间结果图以中间文件的形式进行存储,中间文件中每一操作单元、与每一所述操作单元具有输入关系和/或输出关系的资源单元。
一种可能的实现方式是,对于训练全景图的文件中的所有操作单元,将每一操作单元的输入资源单元或输出资源单元并入相应操作单元中;同时基于前端展示图中各个操作单元之间的连接关系,确定与同一资源单元有输入输出关系的两个操作单元之间的连接关系,保存训练全景图的文件中所有操作单元和每两个操作单元之间的连接关系,得到转换后的中间文件。这样,转换后的中间文件可以方便存储训练全景图的内容,并为后续其他功能图的转换提供支持。
一种可能的实现方式是,对于训练全景图中的所有操作单元,将每一操作单元的输入资源单元或输出资源单元并入相应的操作单元中;同时基于训练全景图中各个操作单元之间的连接关系,确定与同一资源单元有输入输出关系的两个操作单元之间的连接关系,将该连接关系也合入对应操作单元的属性中,直接存储所有操作单元的属性,得到转换后的中间文件。这样,转换后的中间文件可以方便存储训练全景图的内容,可以衔接训练全景图并且可以满足转换成其他图的需要,解决了训练全景图到后端可运行的工作流图难以转换翻译的问题。
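上述"将资源单元并入操作单元属性、再依据与同一资源单元的输入输出关系推导操作单元间连接"的转换思路,可用如下简化示例说明(数据结构与字段名均为示意性假设):

```python
def to_inter_graph(ops):
    """将前端全景图转换为中间结果图:
    把每一操作单元的输入/输出资源单元并入该操作单元属性,
    并在某操作单元的输出资源恰为另一操作单元的输入资源时,推导出二者的连接关系。
    ops: {操作单元名: {"in": [输入资源], "out": [输出资源]}}"""
    inter = {}
    for name, io in ops.items():
        inter[name] = {"inputs": list(io["in"]), "outputs": list(io["out"]), "next": []}
    for a, ia in ops.items():
        for b, ib in ops.items():
            # 若 a 的某输出资源恰为 b 的输入资源,则存在连接 a -> b
            if a != b and set(ia["out"]) & set(ib["in"]):
                inter[a]["next"].append(b)
    return inter

# 检测训练输出模型 M1,检测推理以 M1 为输入,二者据此连接
ops = {"检测训练": {"in": ["D2"], "out": ["M1"]},
       "检测推理": {"in": ["M1", "D5"], "out": ["R1"]}}
inter = to_inter_graph(ops)
```

转换后的中间文件只需存储各操作单元的属性(含并入的资源与连接关系),便于后续向运行图转换。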
其次,基于所述训练中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第一运行图。
这里,每一所述操作单元对应的预设图模板为前端基于任务设定的;所述第一运行图的起始点为所述训练中间结果图中的任一操作单元。中间结果图为有向无环图(Directed Acyclic Graph,DAG),表明所述中间结果图中的所有操作单元各自完成整个任务的一部分,且各操作单元之间满足特定执行顺序的约束,其中一些操作单元必须在另一些操作单元执行完成之后才能开始。这样,能够保证由所有操作单元组成的任务在有效时间内顺利进行。第一运行图的起始点可以是依据需要进行的训练任务设定的;比如,需要进行的训练任务为对检测模型进行训练,起始点为样本数据集输入节点。
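有向无环图所要求的执行顺序约束,可以通过拓扑排序得到一种可行的调度顺序;下面是一个简化示例(图结构与节点名仅为示意,并非本公开的实际实现):

```python
from collections import deque

def topo_order(graph):
    """对有向无环图求一种可行执行顺序:
    仅当某操作单元的全部前驱执行完毕后,该单元才能开始。
    graph: {操作单元: [其后继操作单元列表]}"""
    indeg = {n: 0 for n in graph}
    for n in graph:
        for m in graph[n]:
            indeg[m] += 1
    q = deque(n for n, d in indeg.items() if d == 0)  # 入度为 0 的单元可作为起始点
    order = []
    while q:
        n = q.popleft()
        order.append(n)
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                q.append(m)
    if len(order) != len(graph):
        raise ValueError("图中存在环,不是有向无环图")
    return order

g = {"数据集": ["检测标注"], "检测标注": ["检测训练"], "检测训练": []}
```

按该顺序调度,即可保证检测训练在数据集与标注节点完成之后才开始。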
再次,将所述第一运行图,转换为能够训练所述第一运行图的功能的训练工作流。
以对于工业场景中的缺陷识别任务为例,用户需要先检测出部件,然后对部件中是否有缺陷分别进行分类。这样训练全景图中包括待训练的检测模型和待训练的分类模型,训练分类模型的数据依赖于检测模型的推理结果。从而针对待处理任务为缺陷识别任务的情况,模型训练平台的前端预留物体检测模型相关的检测训练工作流模板、检测评估工作流模板,以及图像分类模型相关的分类训练工作流模板、分类评估工作流模板。
最后,基于所述训练工作流,对所述待训练模型的操作单元进行训练。
这里,在将前端的训练全景图进行翻译为后端的训练工作流之后,按照各个操作单元之间的逻辑关系,训练多个操作单元,这样每个操作单元输出对应的已训练模型。如此,基于有向无化图中的操作单元,结合各个操作单元对应的预设图模板,生成最终可被后端运行的工作流,可以实现复杂场景下多个模型的有序训练。
在一些可能的实现方式中,首先,基于所述训练工作流,确定所述第一运行图中的不同操作单元之间的逻辑关系。比如,第一运行图中的操作单元包括:检测数据集标注单元和检测模型训练单元等,按照检测数据集标注单元和检测模型训练单元之间在训练过程中执行的先后顺序关系,基于检测数据集标注单元对样本数据集进行标注;然后,采用标注好的样本数据集,对检测模型训练单元中的待训练的检测模型进行训练,进而实现对待训练模型的操作单元的训练。如此,通过分析多个操作单元之间的逻辑关系,能够更加准确且合理地实现对操作单元中待训练模型的训练。
第二步,基于所述训练全景图中包括已训练模型的操作单元和所述第一推理全景图,确定用于对所述待处理任务进行推理的第二推理全景图。
在一些可能的实现方式中,所述已训练模型为对所述待训练模型进行训练得到的。对训练全景图中的待训练模型训练完成之后,已训练模型可以直接应用于第一推理全景图中,这样第一推理全景图中便包括了用于推理的工作流数据,从而实现对待处理任务进行推理。如此,在训练完成得到对应的不同模块模型后,直接导入到推理图中进行推理使用,能够快速实现对复杂场景中待处理任务的处理。
在其他实施例中,如果在前端仅搭建了训练全景图,那么对该训练全景图中的待训练模型训练完成之后,可以在训练后的训练全景图中选择可以对待处理任务进行处理的操作单元和资源单元,基于这样的操作单元和资源单元,形成第二推理全景图。
在一些实施例中,在将前端的训练全景图翻译到后端的运行图之后,通过翻译为训练工作流实现对训练全景图中的待训练模型的训练过程;将训练好的模型,导入到前端已经搭建好但是没有数据的第一推理全景图中,得到能够对待处理任务进行推理的第二推理全景图;可以通过以下步骤实现:
第一步,在所述第一推理全景图中,确定与所述训练全景图中包括已训练模型的操作单元相匹配的目标操作单元。
这里,首先,在训练全景图中,确定出已经训练好的模型对应的操作单元;然后,从第一推理全景图中,确定出未包括工作流数据的该操作单元。比如,在第一推理全景图中,确定出包括待训练检测模型的目标操作单元。
第二步,将所述已训练模型导入所述相匹配的目标操作单元,得到所述第二推理全景图。
这里,将已训练模型的工作流数据导入到第一推理全景图中,得到能够对待处理任务进行处理的第二推理全景图。比如,将训练全景图中训练好的检测模型,导入到第一推理图的操作单元中进行推理使用。如此,训练完成得到对应的不同模型后,直接导入到推理图中进行推理使用,提高了搭建整个处理流程的速度。
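将已训练模型导入第一推理全景图中相匹配的目标操作单元、得到第二推理全景图的过程,可用如下示意性代码表示(graph 的字段名如 model_slot 均为说明所设的假设):

```python
def import_models(inference_graph, trained_models):
    """将训练全景图产出的已训练模型,导入第一推理全景图中
    相匹配的目标操作单元,得到可直接用于推理的第二推理全景图。"""
    for node in inference_graph["nodes"]:
        slot = node["model_slot"]
        if slot in trained_models:
            node["model"] = trained_models[slot]  # 导入对应的已训练模型
    return inference_graph

# 第一推理全景图:结构已搭好,但尚无模型(工作流数据)
g1 = {"nodes": [{"model_slot": "检测模型", "model": None},
                {"model_slot": "分类模型", "model": None}]}
# 训练阶段产出 M1 检测模型与 M2 分类模型,直接导入
g2 = import_models(g1, {"检测模型": "M1", "分类模型": "M2"})
```

导入完成后,各目标操作单元均持有对应的已训练模型,可供后端调度工具直接调用。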
在一些实施例中,得到第二推理全景图之后,采用该第二推理全景图对待处理任务进行处理,即上述步骤S104可以通过以下步骤S141和S142(图示未示出)实现:
步骤S141,将所述待处理任务输入所述全景图中的第二推理全景图。
这里,由于第二推理全景图中包括已训练模型;将待处理任务输入到操作单元中包括已训练模型的第二推理全景图中,以便于通过将第二推理全景图转换为工作流数据实现对待处理任务的处理过程。
步骤S142,基于所述第二推理全景图,对所述待处理任务进行处理,得到所述处理结果。
这里,通过将前端的第二推理全景图转换为后端能够运行的中间结果图,进一步将该中间结果图转换为能够对任务进行处理的推理工作流,从而可以采用该推理工作流实现对待处理任务的处理。如此,通过将已训练的操作单元输出的已训练模型,组装成能够实现整个处理流程的第二推理全景图,能够便于直接被后端的任务调度工具调用。
在一些可能的实现方式中,可以通过以下步骤实现对待处理任务的处理:
第一步,将前端的所述第二推理全景图转换为后端的推理中间结果图。
这里,将前端的第二推理全景图转换为后端的推理中间结果图的实现过程,与,将前端的训练全景图转换为后端的训练中间结果图的实现过程相同。即,采用全景图翻译器,将第二推理全景图转换为后端的推理中间结果图。
第二步,基于所述推理中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第二运行图。
这里,所述第二运行图的起始点为所述推理中间结果图中的任一操作单元。第二运行图的起始点的选择依赖于要处理的任务;比如,待处理任务为检测任务,那么第二运行图的起始点为检测模型。按照该推理中间结果图的每一操作单元对应的预设图模板,可以得到与检测模型相关的检测工作流模板,按照该检测工作流模板,构建能够在后端运行的第二运行图。
第三步,将所述第二运行图转换为推理工作流。
采用推理转换器,将第二运行图转换为推理工作流,从而得到用于对待处理任务进行处理的工作流数据。
第四步,采用所述推理工作流,对所述待处理任务进行处理,得到所述处理结果。
在后端,采用转换后的推理工作流,对待处理任务进行处理,得到处理结果。比如,后端的调度工具,通过调用转换后的推理工作流,实现对待处理任务的处理。如此,通过将前端的推理图翻译成推理工作流,无缝连接各个不同功能的节点,能够完成整个处理过程的推理功能。
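将转换后的推理工作流按节点顺序依次执行、无缝衔接各功能节点的过程,可以用如下简化示例勾勒(各节点的处理函数仅为占位示意,并非实际实现):

```python
def run_workflow(steps, task):
    """按推理工作流的节点顺序依次执行,
    将上一节点的输出作为下一节点的输入,实现各功能节点的无缝衔接。
    steps 为 (节点名, 处理函数) 列表。"""
    data = task
    for name, fn in steps:
        data = fn(data)  # 逐节点处理并向下游传递
    return data

# 示意:检测 -> 结果转数据集 -> 抠图 -> 分类 的推理链路
steps = [("检测", lambda x: x + "->检测结果"),
         ("结果转数据集", lambda x: x + "->D6"),
         ("抠图", lambda x: x + "->D7"),
         ("分类", lambda x: x + "->分类结果")]
result = run_workflow(steps, "D5")
```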
在一些实施例中,在待处理任务为分类识别任务的情况下,所述操作单元包括:检测数据集标注单元、抠图单元、检测单元、分类数据集标注单元和分类单元;所述资源单元包括:所述检测数据集标注单元在执行标注操作过程中输入和/或输出的数据,所述抠图单元在执行抠图操作过程中输入和/或输出的数据,所述检测单元在执行检测操作过程中输入和/或输出的数据,分类数据集标注单元在执行标注操作过程中输入和/或输出的数据,以及,所述分类单元在执行分类操作过程中输入和/或输出的数据。基于此,在待处理任务为分类识别任务的情况下,对分类识别任务进行处理的过程如下:
第一步,基于操作单元和资源单元的对应关系,在前端搭建训练全景图和第一推理全景图。
在一些可能的实现方式中,采用Docker虚拟服务技术,将这些操作单元封装为虚拟化节点,包括:检测数据集标注节点、抠图节点、检测节点、分类数据集标注节点和分类节点。如此,将操作单元和资源单元封装为虚拟化的节点,以便于搭建实现整个方案流程的全景图。
第二步,对训练全景图中的待训练模型进行训练。
在一些可能的实现方式中,通过第一步和第二步封装得到的检测节点集合和分类节点集合,在这些节点中,选择用于对检测模型和分类模型进行训练的节点;采用数据集节点、检测标注节点、检测模型训练节点对待训练检测模型进行训练,采用抠图节点、分类标注节点和分类模型训练节点对待训练分类模型进行训练。如此,通过将虚拟化的节点进行串接,可实现对待训练检测模型和待训练分类模型的训练,从而完成全景图中的训练节点。
第三步,将训练全景图中包括已训练模型的操作单元相匹配的目标操作单元,导入第一推理全景图中,得到第二推理全景图。
在一些可能的实现方式中,按照操作单元之间的连接关系,确定训练节点集合之间的连接关系,即确定数据集节点、检测标注节点、检测模型训练节点、抠图节点、分类标注节点和分类模型训练节点之间的连接关系。在本公开实施例中,数据集节点、检测标注节点、检测模型训练节点、抠图节点、分类标注节点和分类模型训练节点的连接关系从上至下依次为:数据集节点、检测标注节点、检测模型训练节点、抠图节点、分类标注节点和分类模型训练节点。
在一些可能的实现方式中,按照训练节点集合中节点之间的连接顺序,依次将数据集节点、检测标注节点、检测模型训练节点、抠图节点、分类标注节点和分类模型训练节点串接起来,如图4所示,从D1数据集至分类模型训练节点407。
在一些实施例中,通过采用已获取的样本数据集,基于该待训练模型所对应的训练工作流,实现对待训练模型的训练,以得到训练完成的训练图,并将训练完成的训练图应用于推理图中,从而实现对整个处理流程的构建。
第四步,采用第二推理全景图,对分类识别任务进行处理。
在本公开实施例中,对于分类识别任务,将包括已训练的检测模型和已训练的分类模型的推理全景图,翻译为推理工作流,对分类识别任务进行处理,在快速且便捷的搭建整个方案处理流程的同时还能够高效的实现对任务的处理。
下面,将说明本公开实施例在一个实际的应用场景中的示例性应用,以针对复杂场景下快速构建算法解决流程,以该复杂场景下的图像中的目标物体进行缺陷识别为例,进行说明。
深度学习算法在各个领域都取得了巨大的进展,也在很多工业领域取得了落地。然而,对于工业场景中的复杂问题,由于整个流程(pipeline)的复杂性、多样化、多模块以及多模态等问题,在算法解决方案上通常也需要适配多个不同模块算法的串接与融合。以人脸识别为例,通常需要包括人脸检测模块、人脸关键点模块、人脸质量模块、活体模块和人脸特征模块等。其他领域的算法解决方案同样需要多个算法模块的组合使用。
基于此,本公开实施例提出了一种针对不同复杂场景下快速构建算法解决方案的方法。本公开实施例中将这种方法称作全景图,该方法基于全景图的概念,将不同的算法模块在全景图中称作算法节点,将不同的数据处理模块封装为虚拟化节点,不同功能模块之间的串接在全景图中称作边。基于全景图,能够快速串接不同的算法模块,达到整体构建整个算法方案的效果,为不同的复杂场景构建专门的全景图任务处理流程。
针对复杂的工业场景,基于全景图,可以直接在前端拖拽构建整个算法解决方案的图,其中的单点算法模块和数据处理模块等功能模块都被封装成Docker镜像,即全景图中的虚拟化节点。基于全景图翻译器,将前端的全景图翻译成对应的训练工作流和推理工作流,工作流基于k8s调度各个镜像,完成全方案的训练和推理功能。如图3所示,图3为本公开实施例提供的全景图翻译器的组成结构示意图,结合图3进行以下说明:
前端300包括:前端全景图301,表示在前端拖拽构建整个任务处理流程的图。
后端302包括:中间结果转换器321、图模板322、根据起始点构建运行图323、训练转换器324、推理转换器325,训练工作流326和推理工作流327;其中:
中间结果转换器321,为中间结果图(inter graph),是用户前端展示图的存储形式,包括节点(node)、操作单元(op)、连接线(link),而中间结果图中主要存在操作单元,并包括处理任务的多个模块。将前端的全景图翻译为一个中间的结果,比如,将前端的全景图翻译为连接数据的结构,用于描述各个模块的功能。
图模板322,表示各个功能模块对应于不同的配置,在中间结果图中对各个功能模块所需的参数进行配置,以使各个功能模块对应于不同的配置,得到该图模板322,即不同的功能模块对应不同的图模板。
根据起始点构建运行图323,在图模板322中选择一个待处理任务的起始点,基于该起始点,形成运行该任务的运行图。
训练转换器324,用于基于构建的运行图,对完成该任务的模板进行训练,并生成训练工作流326。
推理转换器325,用于基于训练好的模型,对该任务进行操作,并形成推理工作流327。
在大多数的工业场景中,比如,在对图像中的目标部件进行缺陷识别场景下,通常需要先检测出具体的部件,然后对具体部件的缺陷进行分类识别;因此,检测+分类可以作为实现缺陷识别的组合方案。在本公开实施例中,以检测模块+分类模块的串接为例,任务处理流程可以分为训练和推理两个阶段。在基于全景图的方案中,通过构建一张图的方式,在后端翻译成训练和推理两套工作流,在训练完成得到对应的不同模块的模型后,导入到推理图中进行推理使用;如图4所示,图4为本公开实施例提供的全景图训练图的实现流程示意图,结合图4进行以下说明:
其中,全景图训练图401包括数据集节点、数据集标注节点、图片抠图节点、模型训练节点等,其中:
检测数据集标注节点402,用于数据的输入,在数据集节点中填写数据集的描述和相应的位置路径,其中,D1数据集为初始的数据集。检测数据集标注节点402输入为数据集(比如,数据集D1),用于对数据集进行检测任务和分类任务的标注,输出为数据集和对应的标注文件;输出的D2数据集在D1数据集的基础上增加了检测任务的标注信息,包括每张图上的检测框(bounding box,bbox)和标签(label)。
图片抠图节点403,输入为数据集以及标注文件(比如,数据集D2),用于根据标注的bbox对图片进行抠图处理,输出为新的数据集D3;即数据集D3为对图片中的具体部件进行抠图后的数据集,用于执行下游的分类任务。
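图片抠图节点根据 bbox 对图像进行裁剪的操作,可用如下简化示例说明(此处用按行存储的二维列表表示图像,bbox 坐标格式假定为 (x1, y1, x2, y2),均为示意):

```python
def crop_by_bboxes(image, bboxes):
    """根据标注或预测的 bbox 对图片进行抠图,
    输出裁剪后的子图列表,作为下游分类任务的数据。
    image 为按行存储的二维像素列表,bbox 为 (x1, y1, x2, y2)。"""
    crops = []
    for (x1, y1, x2, y2) in bboxes:
        # 先按行取 y1:y2,再在每行内取 x1:x2
        crops.append([row[x1:x2] for row in image[y1:y2]])
    return crops

# 示意:在 10x10 的"图像"上抠出一个高 4、宽 2 的区域
image = [[(r, c) for c in range(10)] for r in range(10)]
crops = crop_by_bboxes(image, [(3, 2, 5, 6)])
```

对数据集中每张图按其 bbox 逐一抠图,即可得到用于分类训练或推理的新数据集(如 D3、D7)。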
分类数据集标注节点404,输入为分类数据集D3,用于执行分类任务标注功能,输出为数据集D4;数据集D4是在D3数据集基础上增加了分类标注信息,用来给分类模型训练使用。
检测模型训练节点405,输入为检测数据集D2及标注,用于训练对应的检测模型,输出为M1检测模型406。
分类模型训练节点407,输入为分类数据集D4及标注,用于训练对应的分类模型,输出为M2分类模型408。
从D1数据集到M2分类模型408完成了全景图训练过程,对全景图训练完成之后,进行全景图推理,得到全景图推理图411,在全景图推理图411中包括:数据集节点,推理节点,结果转数据集节点和图片抠图节点等,其中:
检测推理节点412,用于表示数据的输入,在该节点中填写数据集的描述和相应的位置路径,这里数据集D5为待测试的数据集。输入为数据集D5和M1检测模型406,使用具体的M1检测模型,执行推理功能,输出为输入数据集的具体检测结果;即数据集D5的具体检测结果。
结果转数据集节点413,输入为推理结果,用于对推理结果进行转换,以满足下游任务数据输入,输出为符合下游任务格式的数据集D6,这里,数据集D6为满足下游分类任务的数据集。
图片抠图节点414,输入为检测推理得到的结果数据集,用于根据预测的bbox对图片进行抠图处理,输出为抠图后的待分类数据集D7。
分类推理节点415,输入为待分类数据集D7和M2分类模型408,用于执行分类推理功能,输出为最终分类结果。如此,针对现实中复杂的工业场景,比如缺陷识别问题,用户需要先检测对应的部件,然后对相应的不同部件分别进行分类。基于本公开实施例提供的任务处理方法,用户可以构建一张完整的全景图,包括部件检测算法模块、图片抠图处理模块、不同的部件分类算方法模块、推理模块,通过翻译器转换得到对应的训练和推理工作流。
在本公开实施例中,首先,全景图基于图的概念,针对复杂的工业场景,能够快速便捷地构建完整的任务处理流程;然后,将整个算法全链条流程中的不同算法模块和数据处理模块封装成Docker镜像,作为图中的节点;最后,全景图基于翻译器,将前端的全景图翻译成对应的训练工作流和推理工作流,工作流中使用k8s调度不同镜像完成整套流程;从而,实现无缝连接各个不同功能的节点,完成全方案的训练和推理功能。
本公开实施例提供一种任务处理装置,图5为本公开实施例任务处理装置的结构组成示意图,如图5所示,所述任务处理装置500包括:
第一获取模块501,配置为获取待处理任务;
第一确定模块502,配置为确定实现所述待处理任务的操作单元和资源单元;其中,所述操作单元至少包括对所述待处理任务进行处理操作,所述资源单元包括所述操作单元在执行所述处理操作过程中输入和/或输出的数据;
第一构建模块503,配置为基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图;
第一处理模块504,配置为基于所述全景图,对所述待处理任务进行处理,得到处理结果。
在一些实施例中,所述操作单元和所述资源单元均为至少两个,所述全景图包括训练全景图,所述第一构建模块503,包括:
第一确定子模块,配置为在至少两个资源单元中,确定作为每一操作单元的输入的第一资源单元和作为所述每一操作单元的输出的第二资源单元;
第一连接子模块,配置为将所述每一操作单元和对应的第一资源单元和第二资源单元进行连接,得到所述训练全景图。
在一些实施例中,所述全景图包括第一推理全景图,所述第一构建模块503,包括:
第二确定子模块,配置为在前端全景图文件中确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;其中,所述前端全景图文件中包括至少两个操作单元和至少两个资源单元;
第一构建子模块,配置为基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
在一些实施例中,所述全景图包括第一推理全景图,所述第一构建模块503,包括:
第三确定子模块,配置为在所述全景图的训练全景图中,确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;
第二构建子模块,配置为基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
在一些实施例中,所述装置还包括:
第一训练模块,配置为对所述训练全景图中的包括待训练模型的操作单元进行训练;
第二确定模块,配置为基于所述训练全景图中包括已训练模型的操作单元和所述第一推理全景图,确定用于对所述待处理任务进行推理的第二推理全景图;其中,所述已训练模型为对所述待训练模型进行训练得到的。
在一些实施例中,所述第一训练模块,包括:
第一转换子模块,配置为将前端的所述训练全景图转换为后端的训练中间结果图;
第三构建子模块,配置为基于所述训练中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第一运行图;其中,每一所述操作单元对应的预设图模板为前端基于任务设定的;所述第一运行图的起始点为所述训练中间结果图中的任一操作单元;
第二转换子模块,配置为将所述第一运行图,转换为能够训练所述第一运行图的功能的训练工作流;
第一训练子模块,配置为基于所述训练工作流,对所述待训练模型的操作单元进行训练。
在一些实施例中,所述第一训练子模块,包括:
第一确定单元,配置为基于所述训练工作流,确定所述第一运行图中的不同操作单元之间的逻辑关系;
第一训练单元,配置为按照所述逻辑关系,对所述待训练模型的操作单元进行训练。
在一些实施例中,所述第二确定模块,包括:
第四确定子模块,配置为在所述第一推理全景图中,确定与所述训练全景图中包括已训练模型的操作单元相匹配的目标操作单元;
第一导入子模块,配置为将所述已训练模型导入所述相匹配的目标操作单元,得到所述第二推理全景图。
在一些实施例中,所述第一处理模块504,包括:
第一输入子模块,配置为将所述待处理任务输入所述全景图中的第二推理全景图;
第一处理子模块,配置为基于所述第二推理全景图,对所述待处理任务进行处理,得到所述处理结果。
在一些实施例中,所述第一处理子模块,包括:
第一转换单元,配置为将前端的所述第二推理全景图转换为后端的推理中间结果图;
第一构建单元,配置为基于所述推理中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第二运行图;其中,所述第二运行图的起始点为所述推理中间结果图中的任一操作单元;
第二转换单元,配置为将所述第二运行图转换为推理工作流;
第一处理单元,配置为采用所述推理工作流,对所述待处理任务进行处理,得到所述处理结果。
在一些实施例中,在所述待处理任务为分类识别任务的情况下,所述操作单元至少包括:检测数据集标注单元、抠图单元、检测单元、分类数据集标注单元和分类单元;
所述资源单元至少包括:所述检测数据集标注单元在执行标注操作过程中输入和/或输出的数据,所述抠图单元在执行抠图操作过程中输入和/或输出的数据,所述检测单元在执行检测操作过程中输入和/或输出的数据,分类数据集标注单元在执行标注操作过程中输入和/或输出的数据,以及,所述分类单元在执行分类操作过程中输入和/或输出的数据。
需要说明的是,以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本公开装置实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
需要说明的是,本公开实施例中,如果以软件功能模块的形式实现上述的任务处理方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是终端、服务器等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本公开实施例不限制于任何特定的硬件和软件结合。
对应地,本公开实施例再提供一种计算机程序产品,所述计算机程序产品包括计算机可执行指令,该计算机可执行指令被执行后,能够实现本公开实施例提供的任务处理方法中的步骤。
本公开实施例再提供一种计算机存储介质,所述计算机存储介质上存储有计算机可执行指令,该计算机可执行指令被处理器执行时实现上述实施例提供的任务处理方法的步骤。
本公开实施例提供一种计算机设备,图6为本公开实施例计算机设备的组成结构示意图,如图6所示,所述计算机设备600包括:一个处理器601、至少一个通信总线、通信接口602、至少一个外部通信接口和存储器603。其中,通信接口602配置为实现这些组件之间的连接通信。其中,通信接口602可以包括显示屏,外部通信接口可以包括标准的有线接口和无线接口。其中所述处理器601,配置为执行存储器中图像处理程序,以实现上述实施例提供的任务处理方法的步骤。
以上任务处理装置、计算机设备和存储介质实施例的描述,与上述方法实施例的描述是类似的,具有同相应方法实施例相似的技术描述和有益效果,限于篇幅,可参见上述方法实施例的记载,故在此不再赘述。对于本公开任务处理装置、计算机设备和存储介质实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本公开的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本公开的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本公开实施例的实施过程构成任何限定。上述本公开实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本公开所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本公开各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本公开上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。以上所述,仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。
工业实用性
本公开提供了一种任务处理方法及装置、电子设备和存储介质;该方法包括:获取待处理任务;确定实现所述待处理任务的操作单元和资源单元;其中,所述操作单元至少包括对所述待处理任务进行处理操作,所述资源单元包括所述操作单元在执行所述处理操作过程中输入和/或输出的数据;基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图;基于所述全景图,对所述待处理任务进行处理,得到处理结果。

Claims (20)

  1. 一种任务处理方法,所述方法由电子设备执行,所述方法包括:
    获取待处理任务;
    确定实现所述待处理任务的操作单元和资源单元;其中,所述操作单元至少包括对所述待处理任务进行处理操作,所述资源单元包括所述操作单元在执行所述处理操作过程中输入和/或输出的数据;
    基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图;
    基于所述全景图,对所述待处理任务进行处理,得到处理结果。
  2. 根据权利要求1所述的方法,其中,所述操作单元和所述资源单元均为至少两个,所述全景图包括训练全景图,所述基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图,包括:
    在至少两个资源单元中,确定作为每一操作单元的输入的第一资源单元和作为所述每一操作单元的输出的第二资源单元;
    将所述每一操作单元和对应的第一资源单元和第二资源单元进行连接,得到所述训练全景图。
  3. 根据权利要求1或2所述的方法,其中,所述全景图包括第一推理全景图,所述基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图,包括:
    在前端全景图文件中确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;其中,所述前端全景图文件中包括至少两个操作单元和至少两个资源单元;
    基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
  4. 根据权利要求1或2所述的方法,其中,所述全景图包括第一推理全景图,所述基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图,包括:
    在所述全景图的训练全景图中,确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;
    基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
  5. 根据权利要求3或4所述的方法,其中,将所述每一操作单元和对应的第一资源单元和第二资源单元进行连接,得到所述训练全景图之后,所述方法还包括:
    对所述训练全景图中的包括待训练模型的操作单元进行训练;
    基于所述训练全景图中包括已训练模型的操作单元和所述第一推理全景图,确定用于对所述待处理任务进行推理的第二推理全景图;其中,所述已训练模型为对所述待训练模型进行训练得到的。
  6. 根据权利要求5所述的方法,其中,所述对所述训练全景图中的包括待训练模型的操作单元进行训练,包括:
    将前端的所述训练全景图转换为后端的训练中间结果图;
    基于所述训练中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第一运行图;其中,每一所述操作单元对应的预设图模板为前端基于任务设定的;所述第一运行图的起始点为所述训练中间结果图中的任一操作单元;
    将所述第一运行图,转换为能够训练所述第一运行图的功能的训练工作流;
    基于所述训练工作流,对所述待训练模型的操作单元进行训练。
  7. 根据权利要求6所述的方法,其中,所述基于所述训练工作流,对所述待训练模型的操作单元进行训练,包括:
    基于所述训练工作流,确定所述第一运行图中的不同操作单元之间的逻辑关系;
    按照所述逻辑关系,对所述待训练模型的操作单元进行训练。
  8. 根据权利要求5至7任一项所述的方法,其中,所述基于所述训练全景图中包括已训练模型的操作单元和所述第一推理全景图,确定用于对所述待处理任务进行推理的第二推理全景图,包括:
    在所述第一推理全景图中,确定与所述训练全景图中包括已训练模型的操作单元相匹配的目标操作单元;
    将所述已训练模型导入所述相匹配的目标操作单元,得到所述第二推理全景图。
  9. 根据权利要求1至8任一项所述的方法,其中,所述基于所述全景图,对所述待处理任务进行处理,得到处理结果,包括:
    将所述待处理任务输入所述全景图中的第二推理全景图;
    基于所述第二推理全景图,对所述待处理任务进行处理,得到所述处理结果。
  10. 根据权利要求9所述的方法,其中,所述基于所述第二推理全景图,对所述待处理任务进行处理,得到所述处理结果,包括:
    将前端的所述第二推理全景图转换为后端的推理中间结果图;
    基于所述推理中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第二运行图;其中,所述第二运行图的起始点为所述推理中间结果图中的任一操作单元;
    将所述第二运行图转换为推理工作流;
    采用所述推理工作流,对所述待处理任务进行处理,得到所述处理结果。
  11. 根据权利要求1至10任一项所述的方法,其中,在所述待处理任务为分类识别任务的情况下,所述操作单元至少包括:检测数据集标注单元、抠图单元、检测单元、分类数据集标注单元和分类单元;
    所述资源单元至少包括:所述检测数据集标注单元在执行标注操作过程中输入和/或输出的数据,所述抠图单元在执行抠图操作过程中输入和/或输出的数据,所述检测单元在执行检测操作过程中输入和/或输出的数据,分类数据集标注单元在执行标注操作过程中输入和/或输出的数据,以及,所述分类单元在执行分类操作过程中输入和/或输出的数据。
  12. 一种任务处理装置,所述装置包括:
    第一获取模块,配置为获取待处理任务;
    第一确定模块,配置为确定实现所述待处理任务的操作单元和资源单元;其中,所述操作单元至少包括对所述待处理任务进行处理操作,所述资源单元包括所述操作单元在执行所述处理操作过程中输入和/或输出的数据;
    第一构建模块,配置为基于所述待处理任务的操作单元和所述资源单元,构建包括所述待处理任务的处理流程的全景图;
    第一处理模块,配置为基于所述全景图,对所述待处理任务进行处理,得到处理结果。
  13. 根据权利要求12所述的装置,其中,所述操作单元和所述资源单元均为至少两个,所述全景图包括训练全景图,所述第一构建模块,包括:
    第一确定子模块,配置为在至少两个资源单元中,确定作为每一操作单元的输入的第一资源单元和作为所述每一操作单元的输出的第二资源单元;
    第一连接子模块,配置为将所述每一操作单元和对应的第一资源单元和第二资源单 元进行连接,得到所述训练全景图。
  14. 根据权利要求12或13所述的装置,其中,所述全景图包括第一推理全景图,所述第一构建模块,包括:
    第二确定子模块,配置为在前端全景图文件中确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;其中,所述前端全景图文件中包括至少两个操作单元和至少两个资源单元;
    第一构建子模块,配置为基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
  15. 根据权利要求12或13所述的装置,其中,所述全景图包括第一推理全景图,所述第一构建模块,包括:
    第三确定子模块,配置为在所述全景图的训练全景图中,确定与所述待处理任务的处理流程相匹配的目标操作单元和目标资源单元;
    第二构建子模块,配置为基于所述目标操作单元和所述目标资源单元,构建未包括工作流数据的所述第一推理全景图。
  16. 根据权利要求14或15所述的装置,其中,所述装置还包括:
    第一训练模块,配置为对所述训练全景图中的包括待训练模型的操作单元进行训练;
    第二确定模块,配置为基于所述训练全景图中包括已训练模型的操作单元和所述第一推理全景图,确定用于对所述待处理任务进行推理的第二推理全景图;其中,所述已训练模型为对所述待训练模型进行训练得到的。
  17. 根据权利要求16所述的装置,其中,所述第一训练模块,包括:
    第一转换子模块,配置为将前端的所述训练全景图转换为后端的训练中间结果图;
    第三构建子模块,配置为基于所述训练中间结果图中的每一操作单元对应的预设图模板,构建具有起始点的第一运行图;其中,每一所述操作单元对应的预设图模板为前端基于任务设定的;所述第一运行图的起始点为所述训练中间结果图中的任一操作单元;
    第二转换子模块,配置为将所述第一运行图,转换为能够训练所述第一运行图的功能的训练工作流;
    第一训练子模块,配置为基于所述训练工作流,对所述待训练模型的操作单元进行训练。
  18. 一种计算机存储介质,其中,所述计算机存储介质上存储有计算机可执行指令,该计算机可执行指令被执行后,能够实现权利要求1至11任一项所述的任务处理方法。
  19. 一种计算机设备,其中,所述计算机设备包括存储器和处理器,所述存储器上存储有计算机可执行指令,所述处理器运行所述存储器上的计算机可执行指令时能够实现权利要求1至11任一项所述的任务处理方法。
  20. 一种计算机程序产品,其中,所述计算机程序产品包括计算机可执行指令,该计算机可执行指令被执行后,能够实现权利要求1至11任一项所述的任务处理方法。
PCT/CN2021/124779 2021-05-25 2021-10-19 任务处理方法及装置、电子设备和存储介质 WO2022247110A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110570235.4 2021-05-25
CN202110570235.4A CN113342488A (zh) 2021-05-25 2021-05-25 任务处理方法及装置、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2022247110A1 true WO2022247110A1 (zh) 2022-12-01

Family

ID=77471235

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/124779 WO2022247110A1 (zh) 2021-05-25 2021-10-19 任务处理方法及装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN113342488A (zh)
WO (1) WO2022247110A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342488A (zh) * 2021-05-25 2021-09-03 上海商汤智能科技有限公司 任务处理方法及装置、电子设备和存储介质
CN114782445B (zh) * 2022-06-22 2022-10-11 深圳思谋信息科技有限公司 对象缺陷检测方法、装置、计算机设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100306423A1 (en) * 2009-05-26 2010-12-02 Fujitsu Semiconductor Limited Information processing system and data transfer method
CN110378254A (zh) * 2019-07-03 2019-10-25 中科软科技股份有限公司 车损图像修改痕迹的识别方法、系统、电子设备及存储介质
CN111310936A (zh) * 2020-04-15 2020-06-19 光际科技(上海)有限公司 机器学习训练的构建方法、平台、装置、设备及存储介质
CN111435352A (zh) * 2019-01-11 2020-07-21 北京京东尚科信息技术有限公司 一种分布式实时计算方法、装置、系统及其存储介质
CN113342488A (zh) * 2021-05-25 2021-09-03 上海商汤智能科技有限公司 任务处理方法及装置、电子设备和存储介质


Also Published As

Publication number Publication date
CN113342488A (zh) 2021-09-03

Similar Documents

Publication Publication Date Title
US10534605B2 (en) Application system having a gaming engine that enables execution of a declarative language
JP6944548B2 (ja) 自動コード生成
WO2022247110A1 (zh) 任务处理方法及装置、电子设备和存储介质
US10453165B1 (en) Computer vision machine learning model execution service
CN103518183B (zh) 图形对象分类
WO2020140940A1 (zh) 代码的生成方法、装置、设备及存储介质
US20200175975A1 (en) Voice interaction for image editing
WO2022247112A1 (zh) 任务处理方法、装置、设备、存储介质、计算机程序及程序产品
CN112099848B (zh) 一种业务处理方法、装置及设备
US20220391176A1 (en) Configuring machine learning models for training and deployment using graphical components
CN113448678A (zh) 应用信息生成方法、部署方法及装置、系统、存储介质
US11822896B2 (en) Contextual diagram-text alignment through machine learning
CN116756338A (zh) 面向ar装配引导的工艺知识图谱构建方法及系统
US10685470B2 (en) Generating and providing composition effect tutorials for creating and editing digital content
CN110312990A (zh) 配置方法及系统
CN115563334A (zh) 图文数据的处理方法和处理器
US20220383150A1 (en) Instantiating machine-learning models at on-demand cloud-based systems with user-defined datasets
US11720942B1 (en) Interactive retrieval using visual semantic matching
US20220318887A1 (en) Machine learning model generation platform
CN111176624B (zh) 一种流式计算指标的生成方法及装置
CN113821652A (zh) 模型数据处理方法、装置、电子设备以及计算机可读介质
Giesemann et al. A comprehensive ASIC/FPGA prototyping environment for exploring embedded processing systems for advanced driver assistance applications
US20240192950A1 (en) Container name identification processing
US20240176641A1 (en) Apparatus and method for executing digital twin
US11928572B2 (en) Machine learning model generator

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21942674

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21942674

Country of ref document: EP

Kind code of ref document: A1