CN113590286A - Task processing system and task processing method - Google Patents

Task processing system and task processing method

Info

Publication number
CN113590286A
Authority
CN
China
Prior art keywords
module
neural network
training
task
resource module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110856820.0A
Other languages
Chinese (zh)
Inventor
高原
林成龙
徐子豪
李韡
杨凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110856820.0A
Publication of CN113590286A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models

Abstract

The embodiments of the disclosure provide a task processing system and a task processing method. The system comprises a resource module, a training module and an inference module. The training module is configured to call a sample in the resource module, based on the user type, to train a neural network in the resource module, obtain the trained neural network, and send the trained neural network to the resource module. The inference module is configured to call the trained neural network in the resource module, based on the task type of an inference task, to process the inference task, obtain an inference result, and send the inference result to the resource module.

Description

Task processing system and task processing method
Technical Field
The embodiment of the present disclosure relates to, but is not limited to, the technical field of Artificial Intelligence (AI), and in particular, to a task processing system and a task processing method.
Background
Artificial intelligence is a new technical science that studies and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. Neural networks are a major focus of artificial intelligence research and have achieved wide success in research fields such as pattern recognition, automatic control, signal processing and decision assistance.
However, how a user can conveniently train a neural network and then use the trained neural network has long been a concern in the field.
Disclosure of Invention
The embodiment of the disclosure provides a task processing system and a task processing method.
The embodiment of the disclosure provides a task processing system, which comprises a resource module, a training module and an inference module;
The training module is configured to call a sample in the resource module, based on the user type, to train the neural network in the resource module, obtain the trained neural network, and send the trained neural network to the resource module. The inference module is configured to call the trained neural network in the resource module, based on the task type of an inference task, to process the inference task, obtain an inference result, and send the inference result to the resource module.
In some embodiments, the training module is further configured to: acquiring configuration information of a training task under the condition that the user type is a first type; determining the neural network matched with the configuration information of the training task from a plurality of neural networks in the resource module, and sending the neural network to the resource module.
In this way, when the user type is the first type, the training module selects a neural network that matches the configuration information of the training task from the plurality of neural networks in the resource module. Different neural networks can therefore be determined for different training-task configurations, which makes the determined neural network better targeted; in addition, because the neural network is selected from neural networks already present in the resource module, it can be determined quickly.
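As an illustrative sketch (not part of the disclosed embodiments), matching a neural network to the configuration information of a training task might look as follows in Python; `NetworkEntry`, `match_network` and the registry contents are assumptions:

```python
from dataclasses import dataclass

@dataclass
class NetworkEntry:
    name: str
    network_type: str   # network type information
    function: str       # network function information
    scenario: str       # network application scenario information

def match_network(registry, config):
    """Return the first registered network whose metadata matches every
    field present in the training task's configuration information."""
    for entry in registry:
        if all(getattr(entry, key) == value for key, value in config.items()):
            return entry
    return None

registry = [
    NetworkEntry("cls_net", "classification", "face recognition", "access control"),
    NetworkEntry("seg_net", "segmentation", "lane detection", "assisted driving"),
]
picked = match_network(registry, {"network_type": "segmentation"})
```

Because the configuration information may contain any subset of the three fields, matching on only the fields actually supplied lets a sparse configuration still resolve to a concrete network.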
In some embodiments, the training module is further configured to: acquire a first panorama in the case that the user type is a second type, where the first panorama comprises at least two operation units and at least one resource unit corresponding to each operation unit, and the at least one resource unit is associated with the sample; and determine the neural network corresponding to the first panorama, and send the neural network to the resource module.
In this way, in the case that the user type is the second type, the training module acquires the first panorama and determines the neural network corresponding to it, so a customized neural network can be produced that closely meets the user's requirements, making the determined neural network better targeted. In addition, because the first panorama includes operation units and resource units, and the resource units are associated with samples, a first panorama can be customized for a complex scene, allowing the determined neural network to be applied to each such scene.
In some embodiments, the training module is further configured to: converting the first panorama into at least two workflow templates in a linear order; determining the neural network that matches the linearly ordered at least two workflow templates.
In this way, the first panorama is converted into at least two linearly ordered workflow templates, and the neural network matching those templates is then determined. The linearly ordered workflow templates bridge the first panorama and the neural network, which overcomes the difficulty of converting the first panorama into a neural network directly; and because the neural network matches the linearly ordered workflow templates, it can realize the ordered training of complex scenes.
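As an illustrative sketch (not part of the disclosed embodiments), flattening a panorama of interdependent operation units into a linear order can be done with a topological sort; the dictionary layout and the unit names below are assumptions:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Panorama modeled as: operation unit -> the operation units it depends on.
panorama = {
    "preprocess": set(),
    "train": {"preprocess"},
    "evaluate": {"train"},
    "export": {"evaluate"},
}

# static_order() yields units with no unmet dependencies first, producing the
# linear order in which workflow templates would be instantiated.
linear_order = list(TopologicalSorter(panorama).static_order())
```

Each operation unit in `linear_order` would then be mapped to a workflow template, giving the "at least two workflow templates in a linear order" that the neural network is matched against.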
In some embodiments, the task processing system further comprises an annotation module. The annotation module is configured to acquire an unlabeled sample set and labeling attribute information from the resource module, the labeling attribute information being used for labeling at least part of the unlabeled samples in the unlabeled sample set. The annotation module is further configured to acquire a training sample set and send it to the resource module; the training sample set is obtained by labeling the at least part of the unlabeled samples based on the labeling attribute information, and the samples in the training sample set are used for training the neural network.
In this way, the annotation module can acquire the unlabeled sample set and the labeling attribute information from the resource module, so that an annotator can label at least part of the unlabeled samples in the set based on the labeling attribute information to obtain a training sample set. Samples can thus be labeled in a targeted manner, improving both the labeling and the accuracy of the trained neural network. In addition, the user corresponding to the user type and the annotator may be the same person or different persons, so the task can be subdivided, improving the efficiency of obtaining the trained neural network.
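As an illustrative sketch (not part of the disclosed embodiments), the annotation flow above — select part of an unlabeled sample set using the labeling attribute information, label it, and return a training sample set — can be outlined as follows; `build_training_set` and the selector layout are assumptions, and `annotate` stands in for the human annotator:

```python
def build_training_set(unlabeled_samples, labeling_attribute_info, annotate):
    """Label the subset of unlabeled samples selected by the labeling
    attribute information; the remaining samples stay unlabeled."""
    selected = [s for s in unlabeled_samples
                if labeling_attribute_info["selector"](s)]
    return [{"sample": s, "label": annotate(s)} for s in selected]

unlabeled_samples = ["cat_001.jpg", "dog_002.jpg", "car_003.jpg"]
labeling_attribute_info = {"selector": lambda name: "car" not in name}
training_set = build_training_set(
    unlabeled_samples, labeling_attribute_info,
    annotate=lambda name: name.split("_")[0])
```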
In some embodiments, the task processing system further comprises an evaluation module; the evaluation module is used for at least one of the following:
calling at least two first test results in the resource module; the at least two first test results are obtained by testing the trained neural network by adopting at least two first test sample sets; determining evaluation result information of the trained neural network based on the at least two first test results; sending the evaluation result information to the resource module;
calling at least two second test results in the resource module; the at least two second test results are obtained by adopting a second test sample set and respectively testing at least two sub-neural networks included in the trained neural network; respectively determining at least two pieces of evaluation result information respectively corresponding to the at least two second test results; and sending the at least two evaluation results to the resource module.
In this way, the evaluation result information of the trained neural network is determined based on at least two first test results, so the trained neural network can be tested with different test sample sets and its adaptability to those different test sample sets can be determined. By determining the at least two pieces of evaluation result information corresponding to the at least two second test results, a superior sub-neural network can be selected from the at least two sub-neural networks based on that evaluation result information.
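As an illustrative sketch (not part of the disclosed embodiments), the two evaluation cases can be outlined as follows; `evaluate_network`, `rank_sub_networks` and the accuracy-only result records are assumptions:

```python
def evaluate_network(first_test_results):
    """Case 1: one trained network tested on at least two test sample sets;
    summarize adaptability across the sets."""
    accuracies = [r["accuracy"] for r in first_test_results]
    return {"mean_accuracy": sum(accuracies) / len(accuracies),
            "worst_accuracy": min(accuracies)}

def rank_sub_networks(second_test_results):
    """Case 2: at least two sub-networks tested on one test sample set,
    ranked so the superior sub-network comes first."""
    return sorted(second_test_results,
                  key=lambda r: r["accuracy"], reverse=True)

summary = evaluate_network([{"accuracy": 0.90}, {"accuracy": 0.80}])
ranking = rank_sub_networks([{"net": "sub_a", "accuracy": 0.72},
                             {"net": "sub_b", "accuracy": 0.91}])
```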
In some embodiments, the task processing system further comprises a scheduling module; the scheduling module is configured to perform at least one of the following:
scheduling the training module to train the neural network to obtain the trained neural network and sending the trained neural network to the resource module;
scheduling the training module to adopt at least two first test sample sets to test the trained neural network to obtain at least two first test results, and sending the at least two first test results to the resource module;
and scheduling the training module to adopt a second test sample set, respectively testing at least two sub-neural networks included in the trained neural network to obtain at least two second test results, and sending the at least two second test results to the resource module.
In this way, the scheduling module can schedule the training module both to train the neural network and to test the trained neural network, so training and testing are realized with the training module alone, which makes the training module's task handling better targeted and improves task processing efficiency.
In some embodiments, the configuration information of the training task comprises at least one of: network type information, network function information and network application scene information; the training module is further configured to send configuration information of the training task to the resource module.
In this way, since the training module also sends the configuration information of the training task to the resource module, that configuration information can later be acquired from the resource module whenever it is needed, so it can be managed effectively.
In some embodiments, the inference module is further operable to: determining a service port corresponding to the trained neural network; acquiring data to be processed based on the service port, and processing the data to be processed based on the trained neural network to obtain a processing result; and outputting the processing result through the service port.
In this way, the inference module can determine the service port corresponding to the trained neural network, so a user can use the trained neural network through the service port, which makes the trained neural network more convenient to use.
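As an illustrative sketch (not part of the disclosed embodiments), binding a trained network to a service port and serving inference through it can be outlined with an in-process registry standing in for a real server; `InferenceService` and the port numbering are assumptions:

```python
class InferenceService:
    """In-process stand-in for exposing trained networks behind service ports."""

    def __init__(self, first_port=8000):
        self._next_port = first_port
        self._ports = {}

    def expose(self, trained_network):
        """Bind a trained network to a fresh service port; return the port."""
        port, self._next_port = self._next_port, self._next_port + 1
        self._ports[port] = trained_network
        return port

    def infer(self, port, data):
        """Acquire data via the port, process it with the network behind the
        port, and return the processing result."""
        return self._ports[port](data)

service = InferenceService()
port = service.expose(lambda x: x * 2)   # toy "trained network"
result = service.infer(port, 21)
```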
In some embodiments, the annotation module is further configured to: acquire a labeling operation interface from the resource module; the labeling operation interface comprises a labeling tool, and the labeling tool is used for labeling the at least part of unlabeled samples.
In this way, the annotation module can acquire the labeling operation interface from the resource module, so an annotator can label the unlabeled samples with the labeling tool in that interface. The labeling format of the samples then matches the labeling format required for training the neural network, reducing the cases where training cannot proceed because the labeling formats differ.
The embodiments of the disclosure provide a task processing method, which includes: calling a sample in a resource module, based on a user type, to train a neural network in the resource module, obtain a trained neural network, and send the trained neural network to the resource module; and calling the trained neural network in the resource module, based on the task type of an inference task, to process the inference task, obtain an inference result, and send the inference result to the resource module.
In some embodiments, the method further comprises: acquiring configuration information of a training task under the condition that the user type is a first type; determining the neural network matched with the configuration information of the training task from a plurality of neural networks in the resource module, and sending the neural network to the resource module.
In some embodiments, the method further comprises: under the condition that the user type is a second type, acquiring a first panoramic image; the first panorama comprises at least two operation units and at least one resource unit corresponding to each operation unit; the at least one resource unit is associated with the sample; and determining the neural network corresponding to the first panoramic image, and sending the neural network to the resource module.
In some embodiments, the determining the neural network corresponding to the first panorama, sending the neural network to the resource module, comprises: converting the first panorama into at least two workflow templates in a linear order; determining the neural network that matches the linearly ordered at least two workflow templates.
In some embodiments, the method further comprises: acquiring an unlabelled sample set and labeled attribute information from the resource module, wherein the labeled attribute information is used for labeling at least part of unlabelled samples in the unlabelled sample set; acquiring a training sample set, and sending the training sample set to the resource module; the training sample set is obtained by labeling at least part of unlabeled samples based on the labeling attribute information, and the samples in the training sample set are used for training the neural network.
In some embodiments, the method further comprises one of:
calling at least two first test results in the resource module; the at least two first test results are obtained by testing the trained neural network by adopting at least two first test sample sets; determining evaluation result information of the trained neural network based on the at least two first test results; sending the evaluation result information to the resource module;
calling at least two second test results in the resource module; the at least two second test results are obtained by adopting a second test sample set and respectively testing at least two sub-neural networks included in the trained neural network; respectively determining at least two pieces of evaluation result information respectively corresponding to the at least two second test results; and sending the at least two evaluation results to the resource module.
In some embodiments, the method further comprises one of:
scheduling the training module to train the neural network to obtain the trained neural network and sending the trained neural network to the resource module;
scheduling the training module to adopt at least two first test sample sets to test the trained neural network to obtain at least two first test results, and sending the at least two first test results to the resource module;
and scheduling the training module to adopt a second test sample set, respectively testing at least two sub-neural networks included in the trained neural network to obtain at least two second test results, and sending the at least two second test results to the resource module.
In some embodiments, the configuration information of the training task comprises at least one of: network type information, network function information and network application scene information; the method further comprises the following steps: and sending the configuration information of the training task to the resource module.
In some embodiments, the method further comprises: determining a service port corresponding to the trained neural network; acquiring data to be processed based on the service port, and processing the data to be processed based on the trained neural network to obtain a processing result; and outputting the processing result through the service port.
In some embodiments, the method further comprises: acquiring a labeling operation interface from the resource module; the labeling operation interface comprises a labeling tool, and the labeling tool is used for labeling the at least part of unlabeled samples.
In the embodiments of the disclosure, the training module calls the samples in the resource module to train the neural network in the resource module, so the neural network can be trained conveniently; the inference module calls the trained neural network in the resource module to process the inference task, so the trained neural network can be used conveniently; the called sample and neural network are determined based on the user type, so they can be selected appropriately for different user types, making the selection better targeted; the called trained neural network is determined based on the task type of the inference task, so a suitable trained neural network can be selected for different task types, again making the selection better targeted; and the trained neural network and the inference result are sent to the resource module, so the resource module can store them and they can conveniently be read from the resource module later.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive efforts. The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure.
Fig. 1 is a schematic structural diagram of a task processing system according to an embodiment of the present disclosure;
Fig. 2 is a schematic structural diagram of another task processing system according to an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of another task processing system according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of another task processing system according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a task processing system according to another embodiment of the present disclosure;
Fig. 6 is a schematic flowchart of a method for processing a task by using a task processing system according to an embodiment of the present disclosure;
Fig. 7 is a schematic flowchart of a task processing method according to an embodiment of the present disclosure.
Detailed Description
The technical solution of the present disclosure will be specifically described below by way of examples with reference to the accompanying drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
It should be noted that: in the examples of this disclosure, "first," "second," etc. are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
In addition, the technical solutions described in the embodiments of the present disclosure can be arbitrarily combined without conflict.
To train a neural network, samples need to be prepared and labeled, a training task needs to be initiated, and the training needs to run to completion to obtain a trained neural network. The trained neural network is then evaluated to confirm that its effect is satisfactory before it is put to use. However, most of the above steps need to be handled manually, which greatly consumes human resources.
In the embodiment of the present disclosure, a task processing system is provided, which is capable of implementing or assisting to implement the above operations, and improves the automation level of processing tasks.
Fig. 1 is a schematic structural diagram of a task processing system according to an embodiment of the present disclosure, and as shown in fig. 1, a task processing system 10 includes: a resource module 11, a training module 12 and an inference module 13;
the training module 12 is configured to invoke a sample in the resource module 11 to train a neural network in the resource module 11 based on a user type, obtain a trained neural network, and send the trained neural network to the resource module 11;
the reasoning module 13 is configured to call the trained neural network in the resource module 11 to process the reasoning task based on a task type of the reasoning task, obtain a reasoning result, and send the reasoning result to the resource module 11.
In some embodiments, the task processing system 10 may include a staging layer, a background, or a combination of the two, and may be referred to as a service platform. In other embodiments, the task processing system 10, or the client device or annotation-side device described below, may include one or a combination of at least two of the following: a server; a mobile phone; a tablet computer (Pad); a computer with wireless transceiving capability; a palmtop computer; a desktop computer; a personal digital assistant; a portable media player; a smart speaker; a navigation device; wearable devices such as a smart watch, smart glasses, a smart necklace and a pedometer; a digital TV; a Virtual Reality (VR) terminal device; an Augmented Reality (AR) terminal device; a wireless terminal in Industrial Control, Self Driving, Remote Medical Surgery, Smart Grid, Transportation Safety, Smart City or Smart Home scenarios; and a vehicle, vehicle-mounted device or vehicle-mounted module in an Internet of Vehicles system.
The user type may be a first type, characterizing the user as a non-professional user (a so-called novice user), or a second type, characterizing the user as a professional user. A first-type user is one who does not provide a panorama to the task processing system 10, while a second-type user is one who can provide a panorama to the task processing system 10. The first-type user may be an ordinary user, and the second-type user may include at least one of: staff, technicians and professionals.
Training module 12 may be in communication with a client device, and training module 12 may receive a training task sent by the client device and train the neural network based on the training task.
In some embodiments, attribute information of the user may be included in the training task, and training module 12 may determine the user type based on the attribute information of the user.
The attribute information of the user may include at least one of: account information of the user, identity information of the user, occupational information of the user, physiological information of the user (e.g., facial features and/or posture features), and the like. For example, in a case that the attribute information of the user includes account information of the user, the training module 12 may determine a user type corresponding to the account information of the user.
In other embodiments, training module 12 may determine the user type by determining whether a panorama exists in the training task. For example, in the case where it is determined that the panorama does not exist in the training task, the user type is determined to be the first type, and in the case where it is determined that the panorama exists in the training task, the user type is determined to be the second type.
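As an illustrative sketch (not part of the disclosed embodiments), the two user-type rules above — prefer the type registered for the user's account, otherwise infer the type from whether the training task carries a panorama — can be outlined as follows; `determine_user_type` and the dictionary layouts are assumptions:

```python
FIRST_TYPE = "first"    # non-professional user, no panorama in the task
SECOND_TYPE = "second"  # professional user, provides a panorama

def determine_user_type(training_task, account_types=None):
    """Prefer the user type registered for the account; otherwise infer it
    from the presence of a panorama in the training task."""
    account = training_task.get("account")
    if account_types and account in account_types:
        return account_types[account]
    return SECOND_TYPE if "panorama" in training_task else FIRST_TYPE

by_account = determine_user_type({"account": "alice"},
                                 account_types={"alice": SECOND_TYPE})
by_panorama = determine_user_type({"panorama": {"units": []}})
by_default = determine_user_type({"samples": ["a.jpg"]})
```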
In some embodiments, training module 12 calls samples in resource module 11, which may be input by a client device.
In other embodiments, the sample that training module 12 calls in resource module 11 may be determined based on the user type. For example, when the user type is determined to be the first type or the second type, a sample corresponding to that type may be determined from the resource module 11. The sample may include a plurality of samples, which may be all labeled, all unlabeled, or partly labeled and partly unlabeled.
In some embodiments, the annotation complexity and/or image complexity of the samples corresponding to the first type may be lower than the annotation complexity and/or image complexity of the samples corresponding to the second type.
The neural network within resource module 11 that training module 12 invokes may be determined based on the user type. For example, in the case that the user type is determined to be the first type or the second type, a neural network corresponding to the first type or the second type may be determined from the resource module 11, and the neural network may be one neural network, or may include at least two sub-neural networks.
In some embodiments, the complexity of the neural network and/or the size of the neural network corresponding to the first type may be lower than the complexity of the neural network and/or the size of the neural network corresponding to the second type.
In the case where the plurality of samples are all labeled samples, the training of the neural network may be supervised training; in the case where the plurality of samples are unlabeled samples, the training of the neural network may be unsupervised training; in the case where a portion of the plurality of samples is labeled samples and a portion is unlabeled samples, the training of the neural network may be semi-supervised training.
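As an illustrative sketch (not part of the disclosed embodiments), choosing the training regime from label coverage, as described above, can be outlined as follows; `training_mode` and the sample layout are assumptions:

```python
def training_mode(samples):
    """All samples labeled -> supervised; none labeled -> unsupervised;
    a mix of labeled and unlabeled -> semi-supervised."""
    labeled = sum(1 for s in samples if s.get("label") is not None)
    if labeled == len(samples):
        return "supervised"
    if labeled == 0:
        return "unsupervised"
    return "semi-supervised"

mode = training_mode([{"x": 1, "label": "cat"}, {"x": 2}])
```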
The inference module 13 may have a service port, and the user end device may communicate with the inference module 13 through the service port, and the inference module 13 may obtain the inference task sent by the user end device through the service port. The task type of the inference task may include at least one of: downloading the information of the trained neural network; information for testing the trained neural network; information for use of the trained neural network.
For example, in a case that the task type of the inference task is downloading information of the trained neural network, the inference module 13 may determine a storage address (included in the inference result) corresponding to the trained neural network, and return the storage address to the user end device through the service port, so that the user end device can download the trained neural network from the storage address.
For another example, in a case that the task type of the inference task is information for testing the trained neural network, the inference module 13 may further receive a test sample sent by the user end device through the service port or call the test sample from the resource module 11, test the trained neural network through the test sample to obtain a test result (included in the inference result), and send the test result to the user end device through the service port.
For another example, in a case that the task type of the inference task is information for using the trained neural network, the inference module 13 may further receive data to be processed sent by the user end device through the service port, process the data to be processed to obtain a processing result (included in the inference result), and send the processing result to the user end device through the service port.
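As an illustrative sketch (not part of the disclosed embodiments), dispatching on the three inference task types described above can be outlined as follows; `handle_inference_task`, the task dictionaries and the storage address string are assumptions:

```python
def handle_inference_task(task, trained_network, storage_address):
    """Dispatch on the task type of the inference task."""
    kind = task["type"]
    if kind == "download":
        # Return the storage address of the trained network.
        return {"address": storage_address}
    if kind == "test":
        # Test the trained network on the supplied (input, expected) pairs.
        hits = sum(1 for x, y in task["test_set"] if trained_network(x) == y)
        return {"accuracy": hits / len(task["test_set"])}
    if kind == "use":
        # Process the data to be processed.
        return {"result": trained_network(task["data"])}
    raise ValueError(f"unknown inference task type: {kind!r}")

trained_network = lambda x: x + 1        # toy "trained network"
used = handle_inference_task({"type": "use", "data": 41},
                             trained_network, "s3://models/net-v1")
tested = handle_inference_task({"type": "test", "test_set": [(1, 2), (2, 4)]},
                               trained_network, "s3://models/net-v1")
```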
In the embodiment of the present disclosure, since the resource module 11 is provided, multi-link data sharing or data transfer can be realized.
In the embodiment of the present disclosure, the training module 12 calls the samples in the resource module 11 to train the neural network in the resource module 11, so that the neural network can be conveniently trained; the reasoning module 13 calls the trained neural network in the resource module 11 to process the reasoning task, so that the trained neural network can be conveniently used; the called sample and the neural network are determined based on the user type, so that the sample and the neural network can be reasonably selected according to different user types, and the pertinence of the selected sample and the neural network is improved; the called trained neural network is determined based on the task type of the inference task, so that the proper trained neural network can be selected according to different task types of the inference task, and the pertinence of the selected trained neural network is improved; the trained neural network and the reasoning result are sent to the resource module 11, so that the resource module 11 can store the trained neural network and the reasoning result, and the trained neural network and the reasoning result can be conveniently read from the resource module 11.
In some embodiments, the training module 12 is further configured to: acquiring configuration information of a training task under the condition that the user type is a first type; determining the neural network which is matched with the configuration information of the training task from a plurality of neural networks in the resource module 11, and sending the neural network to the resource module 11.
The configuration information of the training task may include at least one of: network type information of the neural network, network function information of the neural network, and network application scenario information of the neural network.
In some embodiments, before the user end device initiates the training task, the user end device may present configuration information of the training task, and the user may input at least one of network type information, network function information, and network application scenario information through the user end device. Configuration information for the training task may be included in the training task such that training module 12 may obtain the configuration information for the training task from the training task.
The network type information may be at least one of a classification network, a detection network, a segmentation network.
A classification network may be used to classify input data. A detection network may detect objects in the input data. A segmentation network may segment objects in the input data. For example, in the case of inputting an image to a classification network, the category of the image may be output through the classification network. For example, in the case of inputting an image to a detection network, an image carrying a detection frame for an object can be obtained by the detection network. For example, in the case of inputting an image to a segmentation network, contour information for an object may be output through the segmentation network. The segmentation network may comprise a semantic segmentation network, a panoptic segmentation network, or an instance segmentation network.
The network function information may refer to function information implemented by the neural network, for example, implementing at least one of: cat and dog classification, face detection in an image, human body detection in an image, contour detection of an object in an image, and the like.
The network application scenario information may refer to a scenario in which the network can be applied; for example, the network application scenario information may be a scenario of a park, a subway, a mall, an office building, or a factory. The neural networks applied in different scenarios, and/or the samples used to train them, may differ.
In some embodiments, where the configuration information for the training task includes a cat and dog classification, training module 12 determines a neural network from the plurality of neural networks that matches the cat and dog classification.
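The matching step above can be sketched as a filter over the resource module's stored networks. This is an illustrative sketch only; the field names (`net_type`, `function`) and the representation of a network as a dict are assumptions.

```python
# Hedged sketch: select the networks in the resource module whose attributes
# match every field the user supplied in the training-task configuration.
def match_network(config, networks):
    """config: dict of configuration fields; networks: list of network dicts."""
    return [net for net in networks
            if all(net.get(key) == value for key, value in config.items())]
```
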
The disclosed embodiments are not limited thereto, and the training module 12 may also determine a trained neural network matching the configuration information of the training task from a plurality of trained networks in the resource module 11.
The training module 12 may obtain the neural network in the case where it receives samples sent by the user end device, or where it is able to call samples from the resource module 11; it may obtain the trained neural network in the case where it can neither receive samples sent by the user end device nor call samples from the resource module 11.
In the embodiment of the present disclosure, in the case that the user type is the first type, the training module 12 selects a neural network that matches the configuration information of the training task from the plurality of neural networks in the resource module 11, so that different neural networks can be determined based on the configuration information of different training tasks, thereby improving the pertinence of the determined neural network, and in addition, since the neural network is selected from the plurality of neural networks in the resource module 11, the neural network can be determined quickly.
In some embodiments, the training module 12 is further configured to: in the case where the user type is a second type, acquire a first panorama, where the first panorama comprises at least two operation units and at least one resource unit corresponding to each operation unit, and the at least one resource unit is associated with the sample; determine the neural network corresponding to the first panorama, and send the neural network to the resource module 11.
For example, training module 12 may receive a first panorama transmitted by a client device. In an implementation process, a user end device may display a canvas, which is used for a user to drag different components on an artificial intelligence training platform to construct a first panorama.
The storage file of the first panorama may include attributes of at least two operation units and attributes of at least two resource units. Each operation unit is a virtualization node which encapsulates one algorithm program; the resource units may be represented as data nodes (nodes) having an input relationship and/or an output relationship with the operation unit, each resource unit is a virtualized node after encapsulating one data processing module, and the data processing module provides input data for a certain algorithm program or processes output data of another algorithm program.
In some embodiments, a resource unit may be an input to an operation unit; in some embodiments, a resource unit is the output of an operation unit; in other embodiments, a resource unit is both the output of a previous operation unit and the input of a next operation unit.
The first panorama can be stored in the artificial intelligence training platform in the form of a file. In some embodiments, the first panorama can include attributes of the operation units and attributes of the data resource units; in other embodiments, the first panorama includes attributes of the operation units, attributes of the data resource units, and connection relationships between the operation units and the data resource units, such as connection lines (links). The attribute of the operation unit may include functions of training, reasoning, evaluating, etc. of the corresponding algorithm program, and may also include a resource unit name connected with the operation unit. The attributes of the resource unit may include data entities in the model training or reasoning process, and may also include data set interface functions, formats of input and output data, picture sizes, and the like.
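One possible in-memory shape for the first panorama's storage file, following the description above (operation-unit attributes, resource-unit attributes, and connection lines). All field names and values are assumptions for illustration, not the platform's actual file format.

```python
# Hypothetical storage form of a first panorama: operation units (virtualized
# algorithm programs), resource units (virtualized data nodes), and the links
# connecting them.
first_panorama = {
    "operation_units": [
        {"name": "train_op", "function": "training",
         "resource_units": ["dataset_in", "model_out"]},
        {"name": "eval_op", "function": "evaluating",
         "resource_units": ["model_out", "report_out"]},
    ],
    "resource_units": [
        {"name": "dataset_in", "format": "jpeg", "picture_size": [224, 224]},
        {"name": "model_out", "format": "checkpoint"},
        {"name": "report_out", "format": "json"},
    ],
    # (source, target) pairs: a resource unit feeding an operation unit, or
    # an operation unit producing a resource unit.
    "links": [("dataset_in", "train_op"), ("train_op", "model_out"),
              ("model_out", "eval_op"), ("eval_op", "report_out")],
}

def resource_names(panorama):
    """Collect the names of all resource units in a panorama."""
    return {r["name"] for r in panorama["resource_units"]}
```

Note that `model_out` is both the output of `train_op` and the input of `eval_op`, matching the case where one resource unit links two operation units.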
In some embodiments, training module 12 is configured to receive the first panorama from the client device.
In other embodiments, training module 12 is further configured to: acquiring configuration information of a training task under the condition that the user type is a second type; based on the configuration information of the training task, a first panorama is determined. In this manner, the user only needs to provide the configuration information of the training task, and training module 12 may determine the matching first panorama according to the configuration information of the training task.
In still other embodiments, the training module 12 is further configured to, if the configuration information of the training task is obtained, send the configuration information of the training task to another client device, so that the another client device determines the matching first panorama based on the configuration information of the training task, and the training module 12 may receive the first panorama sent by the another client device.
In some embodiments, training module 12 may determine the neural network based on the first panorama. In other embodiments, training module 12 may be combined with a scheduling module in task processing system 10 to determine a neural network based on the first panorama.
In the embodiment of the present disclosure, when the user type is the second type, the training module 12 obtains the first panorama and determines the neural network corresponding to the first panorama, so that a customized neural network can be produced, the determined neural network can better meet the user's requirements, and the pertinence of the determined neural network is improved; in addition, since the first panorama includes operation units and resource units, and the resource units are associated with samples, a first panorama for a complex scene can be customized, and the determined neural network can be applied to each such scene.
In some embodiments, the inference module 13 is further configured to: determining a service port corresponding to the trained neural network; acquiring data to be processed based on the service port, and processing the data to be processed based on the trained neural network to obtain a processing result; and outputting the processing result through the service port.
In some embodiments, the inference module 13 may have a target port through which it communicates with the user end device, and the inference module 13 may send the service port to the user end device through the target port so that the user end device can use the trained neural network through the service port.
In other embodiments, the inference module 13 may send the service port to the resource module 11, and the training module 12 may obtain the service port from the resource module 11 and send the service port to the client device.
In the embodiment of the present disclosure, the inference module 13 can determine the service port corresponding to the trained neural network, so that the user can use the trained neural network through the service port, thereby improving the convenience of using the trained neural network by the user.
In some embodiments, training module 12 is to convert the first panorama to at least two workflow templates that are linearly ordered; determining the neural network that matches the linearly ordered at least two workflow templates.
Converting the first panorama to at least two workflow templates in a linear ordering may include: and converting the first panoramic image into an intermediate file, determining an intermediate result image corresponding to the intermediate file, and performing topological sorting on all the operation units in the intermediate file to obtain at least two linearly-sorted workflow templates.
In some embodiments, the intermediate file is an intermediate storage form of a set graph, and the attribute of each operation unit in the intermediate file includes an attribute of a resource unit having an input relationship and/or an output relationship with each operation unit. One possible implementation is that, for all the operation units in the first panorama, the attribute of the input resource unit or the output resource unit of each operation unit is incorporated into the attribute of the corresponding operation unit; meanwhile, determining the connection relation between two operation units which have input and output relations with the same resource unit based on the connection relation between the operation units in the first panorama; then, the attributes of all the operation units and the connection relationship between every two operation units in the first panorama can be saved to obtain a converted intermediate file, or the connection relationship can be incorporated into the attributes of the corresponding operation units to directly store the attributes of all the operation units to obtain the converted intermediate file.
In some embodiments, each operation unit and the connection relationship between each operation unit in the intermediate file can be extracted; and then, connecting each operation unit in the intermediate file according to the connection relation to obtain an intermediate result graph. And forming an intermediate result graph based on the operation units in the intermediate file and the connection relation between the operation units, and providing support for the conversion of other subsequent function graphs.
In some embodiments, the linear arrangement result corresponding to the intermediate result graph can be obtained only when the intermediate result graph is a directed acyclic graph.
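The linear arrangement step above is a topological sort of the operation units, which succeeds exactly when the intermediate result graph is a directed acyclic graph. The sketch below uses Kahn's algorithm; the function name and graph representation are illustrative, not the platform's API.

```python
from collections import deque

# Hedged sketch of the "linear arrangement" of operation units: Kahn's
# algorithm produces a linear ordering iff the graph is acyclic.
def linearize(ops, edges):
    """ops: list of operation-unit names; edges: (upstream, downstream) pairs."""
    indegree = {op: 0 for op in ops}
    downstream = {op: [] for op in ops}
    for src, dst in edges:
        indegree[dst] += 1
        downstream[src].append(dst)
    ready = deque(op for op in ops if indegree[op] == 0)
    order = []
    while ready:
        op = ready.popleft()
        order.append(op)
        for nxt in downstream[op]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(ops):
        # Some units were never freed: the graph contains a cycle.
        raise ValueError("graph has a cycle; no linear ordering exists")
    return order
```
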
Determining the neural network that matches the linearly ordered at least two workflow templates may include: acquiring key information of each operation unit extracted from the intermediate result graph; based on the key information, sequentially filling necessary fields in the preset workflow template of the corresponding operation unit; and generating a neural network according to the linear arrangement result and based on a preset workflow template corresponding to each operation unit.
The preset workflow template corresponding to each operation unit is set by the front end according to the target task. For example, for a defect identification task in an industrial scene, a user needs to detect the corresponding components first and then classify the different components respectively. The problem is thus decomposed into sequential training of a detection model and a classification model, where the data for training the classification model depends on the inference result of the detection model. Therefore, for the scene in which the target task is a defect identification task, a detection training workflow template and a detection evaluation workflow template related to the object detection model, and a classification training workflow template and a classification evaluation workflow template related to the image classification model, are preset at the front end of the model training platform.
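The template-filling step for the defect-identification example can be sketched as follows. The template fields and the shape of the key information are assumptions for illustration; in particular, the chaining is visible in that the classification stage's input is the detection stage's output.

```python
# Hypothetical preset workflow templates for the defect-identification
# example: detect components first, then classify them.
DETECTION_TEMPLATE = {"stage": "detection", "input": None, "output": None}
CLASSIFICATION_TEMPLATE = {"stage": "classification", "input": None, "output": None}

def fill_templates(key_info):
    """key_info: linearly ordered list of dicts with 'stage', 'input', 'output'.

    Fill the necessary fields of the preset template for each operation unit,
    in the linear order produced by the topological sort.
    """
    presets = {"detection": DETECTION_TEMPLATE,
               "classification": CLASSIFICATION_TEMPLATE}
    workflows = []
    for info in key_info:
        template = dict(presets[info["stage"]])  # copy the preset template
        template["input"] = info["input"]
        template["output"] = info["output"]
        workflows.append(template)
    return workflows
```
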
The neural network may be a second panorama, and program code corresponding to the second panorama is used for being called by a task scheduling tool at the back end.
In some embodiments, the training module 12 is further configured to send the configuration information of the training task to the resource module 11. In this way, the configuration information can be obtained from the resource module 11 whenever it is needed later, and the configuration information of the training task can be effectively managed.
In the embodiment of the disclosure, the first panorama is converted into the at least two workflow templates in linear sequencing, and then the neural network matched with the at least two workflow templates in linear sequencing is determined, and the at least two workflow templates in linear sequencing can be connected with the first panorama and the neural network, so that the problem that the first panorama is difficult to convert into the neural network is solved, and the neural network is matched with the at least two workflow templates in linear sequencing, so that the neural network can realize ordered training of a complex scene.
Fig. 2 is a schematic structural diagram of another task processing system provided in an embodiment of the present disclosure, and as shown in fig. 2, the task processing system 10 includes: a resource module 11, a training module 12 and an inference module 13.
The task processing system 10 further comprises an annotation module 14.
The labeling module 14 is configured to obtain an unlabeled sample set and labeled attribute information from the resource module 11, where the labeled attribute information is used to label at least some unlabeled samples in the unlabeled sample set;
the labeling module 14 is further configured to obtain a training sample set, and send the training sample set to the resource module 11; the training sample set is obtained by labeling at least part of the unlabeled samples based on the labeling attribute information, and the samples in the training sample set are used for training the neural network. The labeling attribute information may include at least one of: the shape of the labeling frame to be used, and the labels to be assigned, where a sample may be labeled with a single label or with a plurality of labels.
In some embodiments, the annotation module 14 can obtain the user type, and obtain the unlabeled sample set and the labeled attribute information from the resource module 11 based on the user type.
In other embodiments, the training module 12 may receive the unlabeled sample set and the labeled attribute information sent by the user end device, send the unlabeled sample set and the labeled attribute information to the resource module 11, and the labeling module 14 may obtain the unlabeled sample set and the labeled attribute information from the resource module 11.
In still other embodiments, the training module 12 may receive an unlabeled sample set sent by the user end device, send the unlabeled sample set to the resource module 11, and the labeling module 14 may obtain the unlabeled sample set and the labeled attribute information from the resource module 11.
The labeling module 14 may communicate with a labeling end device, and the labeling module 14 may send an unlabeled sample set and labeled attribute information to the labeling end device, so that a labeling person of the labeling end device labels at least part of unlabeled samples in the unlabeled sample set based on the labeled attribute information to obtain a training sample set; or, the labeling end device labels at least part of unlabeled samples in the unlabeled sample set based on the labeling attribute information to obtain a training sample set.
The labeling module 14 may receive a training sample set sent by the labeling end device, and send the training sample set to the resource module 11, so that in the case of training, the training sample set in the resource module 11 is called.
In the embodiment of the present disclosure, because the labeling module 14 is provided, multi-role cooperation between the user and the labeling personnel can be realized, and different role requirements can be satisfied on the platform function. The labeling module 14 can perform labeling operations such as labeling, drawing, dividing and the like on the data set, and support functions such as task management and contractor management.
In the embodiment of the present disclosure, since the labeling module 14 can obtain the unlabeled sample set and the labeled attribute information from the resource module 11, a labeling person can label at least part of unlabeled samples in the unlabeled sample set based on the labeled attribute information to obtain a training sample set, and can label the samples in the unlabeled sample set in a targeted manner, so that the labeling pertinence is improved, and the accuracy of the trained neural network is improved; in addition, the user corresponding to the user type can be the same person or different persons as the marking person, so that the task can be subdivided, and the efficiency of the trained neural network is improved.
In some embodiments, the annotation module 14 is further configured to: acquiring a labeling operation interface from the resource module 11; the marking operation interface comprises a marking tool, and the marking tool is used for marking the at least part of unmarked samples.
The labeling module 14 may send the labeling operation interface to the labeling end device when the labeling operation interface is obtained, so that the labeling end device may label at least part of the unlabeled samples by using a labeling tool in the labeling operation interface.
In some embodiments, the annotation operation interface obtained by the annotation module 14 from the resource module 11 may be a set annotation operation interface, and the annotation operation interface does not change based on the change of the annotation attribute information. In other embodiments, the tagging operation interface acquired by the tagging module 14 from the resource module 11 may be determined based on the tagging attribute information, so that different tagging operation interfaces corresponding to different tagging attribute information may enable a tagging person to have more pertinence in tagging.
The resource module 11 may store a plurality of annotation operation interfaces, and the annotation module 14 may determine an annotation operation interface matching the annotation attribute information from the plurality of annotation operation interfaces. Alternatively, the resource module 11 may store a set tagging operation interface, and the tagging module 14 may determine, from the set tagging operation interface, a tagging operation interface matched with the tagging attribute information.
In this way, in the case where the labeling attribute information includes a labeling frame whose shape is a rectangular frame, only rectangular labeling frames appear in the labeling portion of the matched labeling operation interface, so that a labeling person can easily select the rectangular labeling frame from the interface, improving labeling efficiency.
In the embodiment of the present disclosure, the labeling module 14 can obtain the labeling operation interface from the resource module 11, so that a labeling person can label the unlabeled sample based on a labeling tool in the labeling operation interface, and thus the labeling format of the unlabeled sample can be matched with the labeling format in the sample for training the neural network, thereby reducing the occurrence of the condition that training cannot be performed due to the inconsistency between the labeling format and the required labeling format.
Fig. 3 is a schematic structural diagram of another task processing system provided in an embodiment of the present disclosure, and as shown in fig. 3, the task processing system 10 includes: a resource module 11, a training module 12 and an inference module 13.
The task processing system 10 further comprises an evaluation module 15.
In some embodiments, the evaluation module 15 is configured to call at least two first test results in the resource module 11; the at least two first test results are obtained by testing the trained neural network by adopting at least two first test sample sets; determining evaluation result information of the trained neural network based on the at least two first test results; and sending the evaluation result information to the resource module 11.
In some embodiments, the at least two first test sample sets may be test sample sets in different scenarios, and the adaptability of the trained neural network to different scenarios can be determined by testing it with test sample sets in different scenarios. In other embodiments, the at least two first test sample sets may be test sample sets in the same scenario, and the accuracy of the trained neural network can be determined by testing it with test sample sets in the same scenario.
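Aggregating the first test results into per-set evaluation information can be sketched as below. The representation of a test result as (predicted, expected) pairs and the accuracy metric are illustrative assumptions.

```python
# Hedged sketch: one accuracy figure per test sample set, so adaptability
# across scenarios (or consistency within one scenario) can be read off.
def evaluate(test_results):
    """test_results: mapping set_name -> list of (predicted, expected) pairs."""
    report = {}
    for name, pairs in test_results.items():
        correct = sum(1 for predicted, expected in pairs if predicted == expected)
        report[name] = correct / len(pairs)
    return report
```
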
In other embodiments, the evaluation module 15 is configured to call at least two second test results in the resource module 11; the at least two second test results are obtained by adopting a second test sample set and respectively testing at least two sub-neural networks included in the trained neural network; respectively determining at least two pieces of evaluation result information respectively corresponding to the at least two second test results; and sending the at least two evaluation results to the resource module 11.
At least two of the sub-neural networks may be neural networks meeting requirements, for example, each sub-neural network may process the inference task to obtain an inference result. In some embodiments, the network structure of at least two sub-neural networks may be different. In other embodiments, the network structure of at least two sub-neural networks may be the same, and the network parameters may be different.
In some embodiments, the at least two sub-neural networks may be matched to the first panorama. For example, the scheduling module may determine at least two sub-neural networks that match the linearly ordered at least two workflow templates. In other embodiments, at least two sub-neural networks may be matched to configuration information of a user type and/or training task.
In some embodiments, the training module 12 may receive the first test sample set and/or the second test sample set sent by the user end device, and send the first test sample set and/or the second test sample set to the resource module 11, so that the evaluation module 15 invokes the first test sample set and/or the second test sample set.
In other embodiments, the resource module 11 may store a plurality of test sample sets, and the evaluation module 15 may select a first test sample set and/or a second test sample set from the plurality of test sample sets. For example, the evaluation module 15 may select a first test sample set and/or a second test sample set from the plurality of test sample sets that match the user type and/or the configuration information of the training task.
The evaluation module 15 may generate the display information based on the obtained evaluation result information when obtaining the evaluation result information. The evaluation module 15 may send the presentation information to the user end device, so that the user end device determines the evaluation result information based on the presentation information.
In the embodiment of the disclosure, evaluation result information of the trained neural network is determined based on the at least two first test results, so that different test sample sets can be used to test the trained neural network separately, and the adaptability of the trained neural network to different test sample sets can then be determined; by determining the at least two pieces of evaluation result information respectively corresponding to the at least two second test results, a superior sub-neural network can be selected from the at least two sub-neural networks based on the at least two pieces of evaluation result information.
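Selecting the superior sub-neural network from the second test results reduces to an argmax over the evaluation scores. A minimal sketch, assuming each piece of evaluation result information boils down to a single higher-is-better score; that reduction is an assumption, not stated by the disclosure.

```python
# Hypothetical selection of the superior sub-neural network from per-network
# evaluation scores obtained on the same second test sample set.
def select_best_subnetwork(evaluations):
    """evaluations: mapping sub-network name -> score (higher is better)."""
    return max(evaluations, key=evaluations.get)
```
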
Fig. 4 is a schematic structural diagram of another task processing system provided in an embodiment of the present disclosure, and as shown in fig. 4, the task processing system 10 includes: a resource module 11, a training module 12 and an inference module 13.
The task processing system 10 further includes a scheduling module 16;
in some embodiments, the scheduling module 16 is configured to: and scheduling the training module 12 to train the neural network to obtain the trained neural network and sending the trained neural network to the resource module 11.
In other embodiments, the scheduling module 16 is configured to: and scheduling the training module 12 to test the trained neural network by adopting at least two first test sample sets to obtain at least two first test results, and sending the at least two first test results to the resource module 11.
In some embodiments, the scheduling module 16 is configured to: and scheduling the training module 12 to respectively test at least two sub-neural networks included in the trained neural network by using a second test sample set to obtain at least two second test results, and sending the at least two second test results to the resource module 11.
In some embodiments, training module 12 includes an algorithm module, and scheduling module 16 may schedule a flow of training or testing for the algorithm module in the training module.
The algorithm module may train the neural network, and/or may test the trained neural network to obtain the trained neural network and/or the first test result and/or the second test result, and the algorithm module may send the trained neural network and/or the first test result and/or the second test result to the resource module 11. In other embodiments, the algorithm module may train the neural network, and/or may test the trained neural network to obtain the trained neural network and/or the first test result and/or the second test result, and the algorithm module may send the trained neural network and/or the first test result and/or the second test result to the scheduling module 16, so that the scheduling module 16 sends the trained neural network and/or the first test result and/or the second test result to the resource module 11.
In the embodiment of the present disclosure, the scheduling module 16 may schedule the algorithm module to train the neural network and test the trained neural network, so that the algorithm module may be specially used to implement the training and testing operations, thereby improving the pertinence of the algorithm module to process the task and further improving the processing efficiency of the task.
Fig. 5 is a schematic structural diagram of a task processing system according to another embodiment of the present disclosure, and as shown in fig. 5, the task processing system 10 may have a product layer, a service layer, and an algorithm layer.
On the product level, training-related modules, data-related modules, user-related modules, and reasoning-related modules can be deployed. The scheduling module 16 and the resource module 11 may be deployed on a service layer. Algorithm modules can be deployed on the algorithm layer.
Wherein the training related modules may include a panorama module, a fast training module, and an evaluation module 15. The panorama module is used for acquiring a first panorama and converting the first panorama into at least two workflow templates which are linearly ordered; determining the neural network that matches the linearly ordered at least two workflow templates. The quick training module is used for acquiring configuration information of a training task; determining the neural network matched with the configuration information of the training task from a plurality of neural networks in the resource module 11, and calling the sample in the resource module 11 to train the neural network. The evaluation module 15 is configured to test the trained neural network to obtain a first test result and/or a second test result.
In the embodiment of the disclosure, the panorama module implements a complete AI solution for a specific scene, including model training, evaluation, and the concatenation of inference logic.
The fast training module is designed for non-technical personnel: it mainly meets their model training needs, allowing them to select data and start training with one click.
The evaluation module implements the model evaluation function and supports one-to-many and many-to-one comparison of evaluation results between models and data sets.
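As an illustrative sketch only, the fast training module's behavior of matching a neural network to the configuration information of a training task could look like the following; the catalog, its field names, and `select_network` are assumptions for this sketch and do not appear in the disclosure:

```python
# Hypothetical catalog of candidate networks the resource module might hold.
# Field names ("type", "scene") mirror the configuration items named in the
# disclosure (network type / application scene) but are otherwise invented.
NETWORK_CATALOG = [
    {"name": "det-small", "type": "detection", "scene": "retail"},
    {"name": "cls-base", "type": "classification", "scene": "industrial"},
    {"name": "seg-large", "type": "segmentation", "scene": "medical"},
]

def select_network(task_config, catalog=NETWORK_CATALOG):
    """Return the first network whose attributes match every field of the
    training-task configuration, or None if no candidate matches."""
    for net in catalog:
        if all(net.get(key) == value for key, value in task_config.items()):
            return net["name"]
    return None

print(select_network({"type": "classification", "scene": "industrial"}))  # cls-base
```

The matched network would then be trained on the samples called from the resource module, as described above.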
The data-related modules may include a resource management module, a manual labeling module, and an auxiliary labeling module. The resource management module is configured to receive samples of the unlabeled sample set and/or the training sample set sent by the user-end device and to send those samples to the resource module 11. The resource management module may also receive a release task sent by the user-end device, acquire from the resource module 11 a network meeting the test requirement corresponding to the release task, generate release information based on that network, and send the release information to the user-end device for display. The manual labeling module may obtain the unlabeled sample set and the labeling attribute information from the resource module 11, send them to the labeling-end device, and receive the training sample set returned by the labeling-end device. The auxiliary labeling module may additionally obtain the labeling operation interface from the resource module 11, send it to the labeling-end device together with the unlabeled sample set and the labeling attribute information, and likewise receive the training sample set returned by the labeling-end device.
The resource management module implements management functions (uploading, downloading, releasing, deleting, and the like) for all resources in the system, such as models, data files, configuration files, and inference applications.
The user-related modules may include a single sign-on module, a user management module, and a rights management module. The single sign-on module receives login information (such as an account and a password) sent by the user-end device to determine whether the user is legitimate. The user management module manages users. The rights management module manages the rights associated with users; for example, it may determine the rights information of a logged-in user based on that user's login information.
The inference related module may comprise an inference module 13.
In the disclosed embodiment, training module 12 may include a panorama module, a fast training module, and an algorithm module.
In the disclosed embodiment, the scheduling module 16 is configured to handle all workflow tasks and resource scheduling, including training tasks, evaluation tasks, data conversion tasks, and the like. In other embodiments, the scheduling module 16 may be referred to as a workflow engine or ArgoEngine.
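The scheduling module's dispatch of workflow tasks by task type might be sketched as follows; the handler names, the queue shape, and the string results are assumptions for illustration and are not part of the disclosure:

```python
from collections import deque

# Toy handlers for the three task kinds named above (training, evaluation,
# data conversion); real handlers would invoke the algorithm module.
HANDLERS = {
    "train": lambda spec: f"trained:{spec}",
    "evaluate": lambda spec: f"evaluated:{spec}",
    "convert": lambda spec: f"converted:{spec}",
}

def run_scheduler(tasks):
    """Drain a FIFO queue of (kind, spec) tasks, dispatching each by type."""
    queue, results = deque(tasks), []
    while queue:
        kind, spec = queue.popleft()
        results.append(HANDLERS[kind](spec))  # dispatch by task type
    return results

print(run_scheduler([("train", "net-a"), ("evaluate", "net-a")]))
# ['trained:net-a', 'evaluated:net-a']
```

A production workflow engine such as the ArgoEngine mentioned above would additionally track dependencies and allocate GPU resources; this sketch only shows the type-based dispatch.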
The resource module 11 stores all resource files for reading and writing by the upper-layer sub-products; the resources managed by the resource module 11 may include the resources required for model training.
In the embodiment of the disclosure, the product layer is designed as different products for different links, each product solving one problem; the different products are different pieces of application software. The service layer is responsible for maintaining and managing all public parts, that is, all resources used in the different links, such as data, models, and configurations; the service layer includes a hard disk and a file system. The service layer is also responsible for scheduling the Graphics Processing Unit (GPU) (corresponding to the algorithm module in the training module in the above embodiments) and for decoupling work among some product-layer systems.
In the embodiment of the disclosure, when users want to customize an AI algorithm for their own business, they can directly use the scheme of the disclosure to complete the whole process. The user first uploads data to the system of the present disclosure and then completes data annotation using the annotation system. The labeled data can then be used directly for model training. The trained model can be evaluated using the evaluation center. Once the evaluation indexes show that the model's accuracy meets the requirement, the user can publish the model to the inference center and complete deployment, thereby realizing the full flow of customizing an AI algorithm model.
In conclusion, the system manages the various data of the whole neural network model production process in a unified manner through the resource module 11, realizing the sharing and transmission of data across multiple links. Meanwhile, clients, annotators, technicians, and others can collaborate on the system, completing both the production of AI algorithm models for various business scenarios and the generation of pipelines (Pipeline) for those scenarios.
Fig. 6 is a schematic flowchart of a task processing method using a task processing system according to an embodiment of the present disclosure, and as shown in fig. 6, the task processing method may include the following steps:
Step 1: The client sends an unlabeled sample set to the resource management module through the user-end device.
Step 2: The resource management module stores the unlabeled sample set in the resource module.
Step 3: The annotator sends a labeling task for the unlabeled sample set to the labeling module through the labeling-end device.
Step 4: The labeling module acquires the labeling attribute information and the unlabeled sample set from the resource module.
Step 5: Having obtained the training sample set, the labeling module stores it in the resource module.
Step 6: A technician initiates a training task to the panorama module through the user-end device.
Step 7: The panorama module acquires the first panorama and converts it into at least two linearly ordered workflow templates.
In the embodiment of the present disclosure, the resource module may store a plurality of training workflow templates, each corresponding to a training template for one type of artificial intelligence model.
A training workflow template can be regarded as an instance of a training framework: it defines a set of general training frameworks together with their hyper-parameter and initial-parameter configurations, based on which the system can automatically complete model training on a training sample set to obtain a model. When the task processing system performs model training with a workflow template, it loads the corresponding network structure from the network warehouse of the resource module as the network structure for training, then combines the hyper-parameters defined in the workflow template with the user's data to obtain the neural network, and trains that neural network to obtain the trained neural network.
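The template-driven training just described can be sketched as follows; the warehouse contents, the template fields, and the toy "training" step are all illustrative assumptions rather than the disclosed implementation:

```python
# Stand-in for the network warehouse of the resource module; the single
# entry and its shape are invented for illustration.
NETWORK_WAREHOUSE = {"resnet-like": {"layers": 18}}

def train_from_template(template, samples):
    """Load the network structure named by the template, attach the
    template's hyper-parameters, and 'train' on the user's sample set."""
    structure = NETWORK_WAREHOUSE[template["network"]]  # load from warehouse
    model = {
        "structure": structure,
        "hyperparams": template["hyperparams"],  # defined in the template
        "trained_on": len(samples),              # toy stand-in for training
    }
    return model

template = {"network": "resnet-like", "hyperparams": {"lr": 0.01, "epochs": 5}}
model = train_from_template(template, samples=["img1", "img2", "img3"])
print(model["trained_on"])  # 3
```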
Step 8: The panorama module determines the neural network matching the at least two linearly ordered workflow templates and sends the neural network to the scheduling module.
Step 9: The scheduling module calls the algorithm module to train the neural network and/or evaluate the trained neural network.
Step 10: The algorithm module stores the trained neural network and/or the evaluation result in the resource module.
Step 11: The technician may send a test task to the evaluation module through the user-end device.
Step 12: The evaluation module reads the evaluation result from the resource module, determines display information based on the evaluation result, and sends the display information to the technician's user-end device.
Step 13: The technician may send a data analysis task to the data analysis module through the user-end device.
The data analysis module may be included in the data-related modules described above.
Step 14: Based on the data analysis task, the data analysis module acquires the corresponding data from the resource module and feeds it back to the technician's user-end device.
Step 15: The technician sends a release task to the resource management module through the user-end device, so that the resource management module determines release information based on the release task and feeds the release information back to the technician's user-end device.
Step 16: The inference module may acquire the trained neural network from the resource module and determine the service port corresponding to the trained neural network.
Step 17: The inference module receives the to-be-processed data sent by the service system through the service port, processes it with the trained neural network to obtain a processing result, and returns the processing result to the service system through the service port.
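The full flow of steps 1 to 17 can be condensed into an illustrative sketch, with a plain dictionary standing in for the resource module; every name, the toy labels, and the fixed accuracy value are assumptions made for the sketch only:

```python
# Dictionary stand-in for the resource module shared by all stages.
resource = {}

def upload(samples):  # steps 1-2: client uploads the unlabeled sample set
    resource["unlabeled"] = samples

def label():  # steps 3-5: annotator produces the training sample set
    resource["train_set"] = [(s, f"label:{s}") for s in resource["unlabeled"]]

def train():  # steps 6-10: training via panorama/scheduling/algorithm modules
    resource["model"] = {"size": len(resource["train_set"])}

def evaluate():  # steps 11-12: evaluation module (accuracy is a placeholder)
    resource["eval"] = {"accuracy": 0.9}

def publish():  # step 15: release only if the accuracy meets a threshold
    resource["published"] = resource["eval"]["accuracy"] >= 0.8

def infer(x):  # steps 16-17: serve the published model
    return f"pred:{x}" if resource.get("published") else None

upload(["a", "b"]); label(); train(); evaluate(); publish()
print(infer("query"))  # pred:query
```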
Fig. 7 is a schematic flowchart of a task processing method provided in an embodiment of the present disclosure. As shown in Fig. 7, the method is applied to a task processing system and may include:
s701, calling a sample in a resource module based on a user type to train a neural network in the resource module to obtain a trained neural network, and sending the trained neural network to the resource module.
S702: Based on the task type of an inference task, call the trained neural network in the resource module to process the inference task, obtain an inference result, and send the inference result to the resource module.
In implementation, the resource module may be disposed in the task processing system, or may be disposed outside of and independently from the task processing system.
In some embodiments, the method further comprises: acquiring configuration information of a training task under the condition that the user type is a first type; determining the neural network matched with the configuration information of the training task from a plurality of neural networks in the resource module, and sending the neural network to the resource module.
In some embodiments, the method further comprises: under the condition that the user type is a second type, acquiring a first panorama, wherein the first panorama comprises at least two operation units and at least one resource unit corresponding to each operation unit, and the at least one resource unit is associated with the sample; and determining the neural network corresponding to the first panorama, and sending the neural network to the resource module.
In some embodiments, the determining of the neural network corresponding to the first panorama and sending the neural network to the resource module comprises: converting the first panorama into at least two linearly ordered workflow templates; and determining the neural network that matches the linearly ordered at least two workflow templates.
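One plausible reading of converting the first panorama into linearly ordered workflow templates is a topological sort of the panorama's operation units; the sketch below assumes the panorama is supplied as a dependency mapping, which is an interpretation rather than something stated in the disclosure:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def panorama_to_workflows(deps):
    """deps: {operation unit: set of units it depends on}.
    Returns the units as one linear order of workflow templates."""
    return list(TopologicalSorter(deps).static_order())

# Toy panorama: preprocess feeds train, train feeds evaluate.
panorama = {"train": {"preprocess"}, "evaluate": {"train"}, "preprocess": set()}
print(panorama_to_workflows(panorama))  # ['preprocess', 'train', 'evaluate']
```

Each unit in the resulting linear order would then correspond to one workflow template, and the matched neural network is determined from that ordered sequence.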
In some embodiments, the method further comprises: acquiring an unlabelled sample set and labeled attribute information from the resource module, wherein the labeled attribute information is used for labeling at least part of unlabelled samples in the unlabelled sample set; acquiring a training sample set, and sending the training sample set to the resource module; the training sample set is obtained by labeling at least part of unlabeled samples based on the labeling attribute information, and the samples in the training sample set are used for training the neural network.
In some embodiments, the method further comprises one of:
calling at least two first test results in the resource module; the at least two first test results are obtained by testing the trained neural network by adopting at least two first test sample sets; determining evaluation result information of the trained neural network based on the at least two first test results; sending the evaluation result information to the resource module;
calling at least two second test results in the resource module; the at least two second test results are obtained by adopting a second test sample set and respectively testing at least two sub-neural networks included in the trained neural network; respectively determining at least two pieces of evaluation result information respectively corresponding to the at least two second test results; and sending the at least two pieces of evaluation result information to the resource module.
In some embodiments, the method further comprises one of:
scheduling the training module to train the neural network to obtain the trained neural network and sending the trained neural network to the resource module;
scheduling the training module to adopt at least two first test sample sets to test the trained neural network to obtain at least two first test results, and sending the at least two first test results to the resource module;
and scheduling the training module to adopt a second test sample set, respectively testing at least two sub-neural networks included in the trained neural network to obtain at least two second test results, and sending the at least two second test results to the resource module.
In some embodiments, the configuration information of the training task comprises at least one of: network type information, network function information and network application scene information; the method further comprises the following steps: and sending the configuration information of the training task to the resource module.
In some embodiments, the method further comprises: determining a service port corresponding to the trained neural network; acquiring data to be processed based on the service port, and processing the data to be processed based on the trained neural network to obtain a processing result; and outputting the processing result through the service port.
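The service-port behavior described above might be sketched as a simple port registry that routes incoming data to the trained network bound to that port; the registry, `register`, `handle_request`, and the toy stand-in for inference are all assumptions for illustration:

```python
# Maps a service port to (network name, inference function).
PORT_REGISTRY = {}

def register(network_name, port, fn):
    """Bind a trained network's inference function to a service port."""
    PORT_REGISTRY[port] = (network_name, fn)

def handle_request(port, data):
    """Route to-be-processed data through the network bound to the port
    and return the processing result via that port."""
    name, fn = PORT_REGISTRY[port]
    return {"network": name, "result": fn(data)}

register("det-small", 8080, lambda x: x.upper())  # toy stand-in for inference
print(handle_request(8080, "cat"))  # {'network': 'det-small', 'result': 'CAT'}
```

A real deployment would bind an actual network socket; the sketch only shows the port-to-network routing the disclosure describes.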
In some embodiments, the method further comprises: acquiring a labeling operation interface from the resource module; the marking operation interface comprises a marking tool, and the marking tool is used for marking the at least part of unmarked samples.
The task processing system executes the methods in the embodiments of the present disclosure; a processor or chip of the task processing system may likewise execute these methods.
The above description of the method embodiments is similar to the description of the system embodiments, with similar beneficial effects. For technical details not disclosed in the method embodiments of the disclosure, refer to the description of the system embodiments of the disclosure.
Each of the modules, processors, or chips described above may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. Each of the modules, processors, or chips described above may include any one or a combination of at least two of the following: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an embedded neural Network Processing Unit (NPU), a controller, a microcontroller, and a microprocessor. It is understood that the electronic device implementing the above processor functions may also be another device; the embodiments of the present disclosure are not particularly limited in this regard.
The resource module may further include any one or a combination of at least two of: a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read Only Memory (CD-ROM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment of the present disclosure" or "a previous embodiment" or "some implementations" or "some embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment" or "the presently disclosed embodiment" or "the foregoing embodiments" or "some implementations" or "some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
Unless otherwise specified, any step in the embodiments of the present disclosure may be performed by each module or by the processor of each module. Unless otherwise specified, the disclosed embodiments do not limit the order in which each module performs the steps. In addition, different embodiments may process the data in the same way or in different ways. It should be further noted that in the embodiments of the present disclosure each module may perform independently; that is, each module may perform any step in the above embodiments without depending on the performance of other steps.
In the description of the present disclosure, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral connection; a mechanical connection, an electrical connection, or mutual communication; a direct connection, an indirect connection through intervening media, or an internal connection between two elements. The specific meanings of the above terms in the present disclosure can be understood by those of ordinary skill in the art as appropriate.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some ports, indirect coupling or communication connection between devices or units, and may be electrical, mechanical or other.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The features disclosed in the several method embodiments provided in the present disclosure may be combined arbitrarily, without conflict, to arrive at new method embodiments.
Features disclosed in several of the product embodiments provided in this disclosure may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in this disclosure may be combined in any combination to arrive at a new method or apparatus embodiment without conflict.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units of the present disclosure may be stored in a computer storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
In the embodiments of the present disclosure, the descriptions of the same steps and the same contents in different embodiments may be mutually referred to. In the embodiment of the present disclosure, the term "and" does not affect the sequence of the steps, for example, each module executes a and executes B, where each module executes a first and then executes B, or each module executes B first and then executes a, or each module executes a and then executes B at the same time.
It should be noted that the drawings in the embodiments of the present disclosure are only for illustrating schematic positions of the respective modules on the task management system, and do not represent actual positions in the task management system, the actual positions of the respective modules or the respective areas may be changed or shifted according to actual situations (for example, the structure of the task management system), and the scale of different parts in the task management system in the drawings does not represent the actual scale.
As used in the disclosed embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be noted that, in the embodiments of the present disclosure, all the steps may be executed or some of the steps may be executed, as long as a complete technical solution can be formed.
The above description is only an embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (20)

1. A task processing system, comprising a resource module, a training module, and an inference module;
the training module is used for calling a sample in the resource module based on the user type to train the neural network in the resource module to obtain the trained neural network and sending the trained neural network to the resource module;
and the inference module is used for calling the trained neural network in the resource module to process an inference task based on the task type of the inference task to obtain an inference result and sending the inference result to the resource module.
2. The task processing system of claim 1, wherein the training module is further configured to:
acquiring configuration information of a training task under the condition that the user type is a first type;
determining the neural network matched with the configuration information of the training task from a plurality of neural networks in the resource module, and sending the neural network to the resource module.
3. The task processing system of claim 1, wherein the training module is further configured to:
under the condition that the user type is a second type, acquire a first panorama; the first panorama comprises at least two operation units and at least one resource unit corresponding to each operation unit; the at least one resource unit is associated with the sample;
and determine the neural network corresponding to the first panorama, and send the neural network to the resource module.
4. The task processing system of claim 3, wherein the training module is further configured to:
converting the first panorama into at least two workflow templates in a linear order;
determining the neural network that matches the linearly ordered at least two workflow templates.
5. The task processing system according to any of claims 1 to 4, further comprising an annotation module;
the labeling module is used for acquiring an unlabeled sample set and labeled attribute information from the resource module, wherein the labeled attribute information is used for labeling at least part of unlabeled samples in the unlabeled sample set;
the marking module is further used for acquiring a training sample set and sending the training sample set to the resource module; the training sample set is obtained by labeling at least part of unlabeled samples based on the labeling attribute information, and the samples in the training sample set are used for training the neural network.
6. The task processing system according to any one of claims 1 to 5, further comprising an evaluation module, wherein the evaluation module is configured to perform at least one of the following:
calling at least two first test results in the resource module; the at least two first test results are obtained by testing the trained neural network by adopting at least two first test sample sets; determining evaluation result information of the trained neural network based on the at least two first test results; sending the evaluation result information to the resource module;
calling at least two second test results in the resource module; the at least two second test results are obtained by adopting a second test sample set and respectively testing at least two sub-neural networks included in the trained neural network; respectively determining at least two pieces of evaluation result information respectively corresponding to the at least two second test results; and sending the at least two pieces of evaluation result information to the resource module.
7. The task processing system according to any one of claims 1 to 6, further comprising a scheduling module, wherein the scheduling module is configured to perform at least one of the following:
scheduling the training module to train the neural network to obtain the trained neural network and sending the trained neural network to the resource module;
scheduling the training module to adopt at least two first test sample sets to test the trained neural network to obtain at least two first test results, and sending the at least two first test results to the resource module;
and scheduling the training module to adopt a second test sample set, respectively testing at least two sub-neural networks included in the trained neural network to obtain at least two second test results, and sending the at least two second test results to the resource module.
8. The task processing system of claim 2, wherein the configuration information of the training task comprises at least one of: network type information, network function information, and network application scene information;
the training module is further configured to send configuration information of the training task to the resource module.
9. The task processing system of any one of claims 1 to 8, wherein the inference module is further configured to:
determining a service port corresponding to the trained neural network;
acquiring data to be processed based on the service port, and processing the data to be processed based on the trained neural network to obtain a processing result;
and outputting the processing result through the service port.
10. The task processing system of claim 5, wherein the annotation module is further configured to:
acquiring a labeling operation interface from the resource module; the marking operation interface comprises a marking tool, and the marking tool is used for marking the at least part of unmarked samples.
11. A task processing method, comprising:
calling a sample in a resource module based on a user type to train a neural network in the resource module to obtain a trained neural network and sending the trained neural network to the resource module;
and calling the trained neural network in the resource module based on the task type of an inference task to process the inference task, obtaining an inference result, and sending the inference result to the resource module.
12. The task processing method of claim 11, further comprising:
acquiring configuration information of a training task under the condition that the user type is a first type;
determining the neural network matched with the configuration information of the training task from a plurality of neural networks in the resource module, and sending the neural network to the resource module.
13. The task processing method of claim 11, further comprising:
under the condition that the user type is a second type, acquiring a first panorama; the first panorama comprises at least two operation units and at least one resource unit corresponding to each operation unit; the at least one resource unit is associated with the sample;
and determining the neural network corresponding to the first panorama, and sending the neural network to the resource module.
14. The task processing method of claim 13, wherein the determining the neural network corresponding to the first panorama, and sending the neural network to the resource module, comprises:
converting the first panorama into at least two workflow templates in a linear order;
determining the neural network that matches the linearly ordered at least two workflow templates.
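One plausible reading of claim 14 is that the panorama is a dependency graph of operation units, and "linear order" is a topological order of that graph. The sketch below linearizes such a graph with Kahn's algorithm, under that assumption; the unit names are illustrative.

```python
# Hedged sketch for claim 14: convert a panorama, modeled as a DAG of
# operation units, into workflow templates in a linear (topological) order.

from collections import deque

def to_linear_order(units, edges):
    """Kahn's algorithm: linearize operation units respecting dependencies."""
    indegree = {unit: 0 for unit in units}
    for src, dst in edges:
        indegree[dst] += 1
    ready = deque(unit for unit in units if indegree[unit] == 0)
    order = []
    while ready:
        unit = ready.popleft()
        order.append(unit)
        for src, dst in edges:           # release units that depended on it
            if src == unit:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    ready.append(dst)
    if len(order) != len(units):
        raise ValueError("panorama contains a cycle; no linear order exists")
    return order

units = ["load_data", "augment", "train", "evaluate"]
edges = [("load_data", "augment"), ("augment", "train"), ("train", "evaluate")]
print(to_linear_order(units, edges))  # → ['load_data', 'augment', 'train', 'evaluate']
```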
15. The task processing method according to any one of claims 11 to 14, further comprising:
acquiring an unlabeled sample set and labeling attribute information from the resource module, wherein the labeling attribute information is used for labeling at least part of unlabeled samples in the unlabeled sample set;
acquiring a training sample set, and sending the training sample set to the resource module; the training sample set is obtained by labeling the at least part of unlabeled samples based on the labeling attribute information, and the samples in the training sample set are used for training the neural network.
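The labeling flow of claim 15 can be sketched as joining unlabeled sample identifiers with labeling attribute information to form the training sample set. The dictionary format for the attribute information, and the sample identifiers, are assumptions for illustration.

```python
# Sketch of claim 15: unlabeled samples plus labeling attribute information
# yield a labeled training sample set covering at least part of the samples.

def build_training_set(unlabeled_samples, label_attributes):
    """Attach a label to every sample for which attribute info exists."""
    return [(sample, label_attributes[sample])
            for sample in unlabeled_samples if sample in label_attributes]

unlabeled = ["img_001", "img_002", "img_003"]
attrs = {"img_001": "cat", "img_003": "dog"}   # labels only a subset
print(build_training_set(unlabeled, attrs))  # → [('img_001', 'cat'), ('img_003', 'dog')]
```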
16. The task processing method according to any one of claims 11 to 15, further comprising one of:
calling at least two first test results in the resource module; the at least two first test results are obtained by testing the trained neural network by adopting at least two first test sample sets; determining evaluation result information of the trained neural network based on the at least two first test results; sending the evaluation result information to the resource module;
calling at least two second test results in the resource module; the at least two second test results are obtained by adopting a second test sample set and respectively testing at least two sub-neural networks included in the trained neural network; respectively determining at least two pieces of evaluation result information respectively corresponding to the at least two second test results; and sending the at least two pieces of evaluation result information to the resource module.
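The first branch of claim 16 leaves the aggregation of the first test results unspecified. The sketch below assumes each test result carries an accuracy figure and summarizes them with mean/min/max as one possible form of evaluation result information; the metric name is an assumption.

```python
# Illustrative aggregation for claim 16: combine at least two first test
# results into evaluation result information for the trained network.

def evaluate(test_results):
    """Aggregate per-test-set accuracies into a mean/min/max summary."""
    accuracies = [result["accuracy"] for result in test_results]
    assert len(accuracies) >= 2, "claim 16 requires at least two test results"
    return {
        "mean_accuracy": sum(accuracies) / len(accuracies),
        "min_accuracy": min(accuracies),
        "max_accuracy": max(accuracies),
    }

results = [{"accuracy": 0.90}, {"accuracy": 0.94}]
print(round(evaluate(results)["mean_accuracy"], 2))  # → 0.92
```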
17. The task processing method according to any one of claims 11 to 16, further comprising one of:
scheduling the training module to train the neural network to obtain the trained neural network and sending the trained neural network to the resource module;
scheduling the training module to adopt at least two first test sample sets to test the trained neural network to obtain at least two first test results, and sending the at least two first test results to the resource module;
and scheduling the training module to adopt a second test sample set, respectively testing at least two sub-neural networks included in the trained neural network to obtain at least two second test results, and sending the at least two second test results to the resource module.
18. The task processing method of claim 12, wherein the configuration information of the training task comprises at least one of: network type information, network function information, and network application scenario information; the method further comprising:
and sending the configuration information of the training task to the resource module.
19. The task processing method according to any one of claims 11 to 18, further comprising:
determining a service port corresponding to the trained neural network;
acquiring data to be processed based on the service port, and processing the data to be processed based on the trained neural network to obtain a processing result;
and outputting the processing result through the service port.
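The service-port flow of claim 19 can be reduced to a port-to-network registry: a port is bound to the trained network, data arrives at the port, and the result is output through the same port. The registry shape and the toy uppercase "network" are illustrative assumptions.

```python
# Sketch of claim 19: determine a service port for the trained network,
# acquire data via that port, process it, and output the result.

SERVICE_PORTS = {}

def deploy(port, network):
    """Determine a service port corresponding to the trained network."""
    SERVICE_PORTS[port] = network

def handle(port, data):
    """Acquire data via the port, process it, and output the result."""
    network = SERVICE_PORTS[port]
    return network(data)

deploy(8080, lambda text: text.upper())   # stand-in for a trained network
print(handle(8080, "cat"))  # → CAT
```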
20. The task processing method of claim 15, further comprising:
acquiring a labeling operation interface from the resource module; the labeling operation interface comprises a labeling tool, and the labeling tool is used for labeling the at least part of unlabeled samples.
CN202110856820.0A 2021-07-28 2021-07-28 Task processing system and task processing method Pending CN113590286A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110856820.0A CN113590286A (en) 2021-07-28 2021-07-28 Task processing system and task processing method

Publications (1)

Publication Number Publication Date
CN113590286A true CN113590286A (en) 2021-11-02

Family

ID=78251348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110856820.0A Pending CN113590286A (en) 2021-07-28 2021-07-28 Task processing system and task processing method

Country Status (1)

Country Link
CN (1) CN113590286A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190156202A1 (en) * 2016-05-02 2019-05-23 Scopito Aps Model construction in a neural network for object detection
CN111047563A (en) * 2019-11-26 2020-04-21 深圳度影医疗科技有限公司 Neural network construction method applied to medical ultrasonic image
CN111160555A (en) * 2019-12-26 2020-05-15 北京迈格威科技有限公司 Processing method and device based on neural network and electronic equipment
CN111882059A (en) * 2020-07-17 2020-11-03 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
CN112559753A (en) * 2021-01-14 2021-03-26 南京大学 Management framework of natural language text processing and analyzing task based on business process management technology
CN112559860A (en) * 2020-12-10 2021-03-26 奥佳华智能健康科技集团股份有限公司 Massage program intelligent recommendation method and system based on deep learning
CN112711409A (en) * 2019-10-25 2021-04-27 杭州海康威视数字技术股份有限公司 Application program development and operation method and system and intelligent analysis equipment

Non-Patent Citations (1)

Title
LIN, Chenglong; HU, Wei; LI, Ruirui: "Hierarchical multi-task clothing classification based on deep convolutional neural networks", Chinese Journal of Stereology and Image Analysis, no. 02, pages 39-45 *

Similar Documents

Publication Publication Date Title
CN110288049B (en) Method and apparatus for generating image recognition model
JP2021099852A (en) Method and apparatus for minimization of false positive in facial recognition application
CN109740018B (en) Method and device for generating video label model
CN106980868A (en) Embedded space for the image with multiple text labels
CN111012261A (en) Sweeping method and system based on scene recognition, sweeping equipment and storage medium
CN106980867A (en) Semantic concept in embedded space is modeled as distribution
CN109145828B (en) Method and apparatus for generating video category detection model
CN109947989B (en) Method and apparatus for processing video
CN113723513B (en) Multi-label image classification method and device and related equipment
CN112346845B (en) Method, device and equipment for scheduling coding tasks and storage medium
CN110472558B (en) Image processing method and device
CN113392236A (en) Data classification method, computer equipment and readable storage medium
KR20230013280A (en) Classify and discover client application content
CN111158884A (en) Data analysis method and device, electronic equipment and storage medium
CN112153422B (en) Video fusion method and device
CN113191479A (en) Method, system, node and storage medium for joint learning
CN109816023B (en) Method and device for generating picture label model
US11048745B2 (en) Cognitively identifying favorable photograph qualities
US11373057B2 (en) Artificial intelligence driven image retrieval
CN106777066B (en) Method and device for image recognition and media file matching
CN110415318B (en) Image processing method and device
CN116863116A (en) Image recognition method, device, equipment and medium based on artificial intelligence
CN113590286A (en) Task processing system and task processing method
WO2022156468A1 (en) Method and apparatus for processing model data, electronic device, and computer-readable medium
CN112800235B (en) Visual knowledge graph data modeling method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination