CN115686280A - Deep learning model management system, method, computer device and storage medium - Google Patents


Info

Publication number
CN115686280A
Authority
CN
China
Prior art keywords
training
deep learning
learning model
samples
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110849991.0A
Other languages
Chinese (zh)
Inventor
路元元
柴栋
雷一鸣
王洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Beijing Zhongxiangying Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing Zhongxiangying Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd, Beijing Zhongxiangying Technology Co Ltd filed Critical BOE Technology Group Co Ltd
Priority claimed from CN202110849991.0A
Publication of CN115686280A
Legal status: Pending

Abstract

The embodiment of the invention discloses a deep learning model management system, a deep learning model management method, computer equipment and a storage medium. In one embodiment, the system comprises: a labeling unit, used for displaying a data set management interface in response to an operation on a data set management control in a main control bar, displaying a labeling task allocation interface in response to an operation on a labeling control of the data set management interface, and issuing a labeling task and acquiring a training set in response to an operation on a labeling task allocation control of the labeling task allocation interface, the training set comprising a plurality of training samples labeled with labels; a training unit, used for training at least one deep learning model according to the training set; and a publishing unit, used for publishing the trained deep learning model for online use by users. This implementation can construct a full-life-cycle closed-loop deep learning platform system in which models go online for users to use.

Description

Deep learning model management system, method, computer device and storage medium
Technical Field
The invention relates to the technical field of deep learning, and more particularly to a deep learning model management system, method, computer device, and storage medium.
Background
At present, the management of deep learning models has a low degree of automation and intelligence, resources cannot be effectively integrated, and efficient, high-quality management of deep learning models cannot be achieved.
Disclosure of Invention
An object of the present invention is to provide a deep learning model management system, method, computer device and storage medium, so as to solve at least one of the problems in the prior art.
To achieve this object, the invention adopts the following technical solution:
the first aspect of the present invention provides a deep learning model management system, including:
a labeling unit, used for displaying a data set management interface in response to an operation on a data set management control in a main control bar, displaying a labeling task allocation interface in response to an operation on a labeling control of the data set management interface, and issuing a labeling task and acquiring a training set in response to an operation on the labeling task allocation control of the labeling task allocation interface, wherein the training set comprises a plurality of training samples labeled with labels;
the training unit is used for training at least one deep learning model according to the training set;
and a publishing unit, used for publishing the trained deep learning model for online use by users.
According to the deep learning model management system provided by the first aspect of the invention, by providing the labeling unit, the training unit and the publishing unit, at least one deep learning model can be obtained by training on a plurality of labeled training samples, and the trained deep learning model is published, thereby constructing a full-life-cycle closed-loop deep learning platform system in which models go online for users to use.
Optionally, the annotation unit is further configured to display a verification setting interface in response to an operation on a verification control of the data set management interface, determine a verification parameter in response to an operation on the verification setting control of the verification setting interface, determine an annotation quality standard in response to an operation on an annotation quality standard setting control of the annotation task allocation interface, and issue a verification task in response to an operation on the verification task allocation control of the annotation task allocation interface.
Optionally, the verification parameters include a verification mode, and also include a verification quantity or a verification proportion.
Optionally, the labeling task corresponds to a part of the training set, and the labeling unit being configured to acquire the training set comprises:
displaying an intelligent labeling interface in response to an operation on a labeling tool control in the main control bar, and, in response to an operation on an intelligent labeling control of the intelligent labeling interface: acquiring a sample set, wherein the sample set comprises partially labeled training samples, which come from the part of the training set, and samples to be expanded other than those training samples; training a first deep learning model on the training samples; inputting some of the samples to be expanded into the trained first deep learning model for inference, so as to obtain labels for those samples; and judging whether the inference accuracy of the trained first deep learning model meets a first preset requirement: if not, correcting the labels of those samples, expanding the training samples with the corrected samples, and returning to the step of training the first deep learning model on the training samples; if so, inputting the remaining samples to be expanded into the trained first deep learning model for inference to obtain their labels, so that labels are obtained for all samples in the sample set and the training set is obtained.
In this alternative, a deep learning model is trained in stages from partially labeled training samples; the staged model is used to expand the labeled data, the model is further trained on the expanded data, and the cycle repeats until the model's inference accuracy meets the requirement. The qualifying model then infers labels for the remaining unlabeled samples in the sample set to obtain the training set, so that a large sample set can be obtained efficiently.
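The staged train, infer, and expand loop described in this alternative can be sketched as follows. This is an illustrative toy, not the patent's actual model code: samples are numbers, the true label is a number's sign, and the `train`, `true_label` and `accuracy_of` stand-ins are assumptions.

```python
# Toy stand-ins (assumptions, not the patent's model code): "samples" are
# numbers, the true label is their sign, and the "model" memorises a
# decision threshold fitted to the labeled data.
def train(labeled):
    # The seed set must contain at least one sample of each class.
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    cut = (min(pos) + max(neg)) / 2
    return lambda x: 1 if x > cut else 0

def true_label(x):                       # ground truth, used for correction
    return 1 if x > 0 else 0

def accuracy_of(model, probe):           # inference accuracy on a batch
    return sum(model(x) == true_label(x) for x in probe) / len(probe)

def iterative_labeling(labeled, unlabeled, batch=2, target=1.0):
    while unlabeled:
        model = train(labeled)                       # train on current labels
        chunk, unlabeled = unlabeled[:batch], unlabeled[batch:]
        if accuracy_of(model, chunk) < target:
            # Accuracy too low: correct the batch labels, expand the
            # training set, and loop back to retrain.
            labeled = labeled + [(x, true_label(x)) for x in chunk]
        else:
            # Accuracy acceptable: label this batch and all remaining samples.
            labeled = labeled + [(x, model(x)) for x in chunk + unlabeled]
            unlabeled = []
    return labeled
```

Starting from a small labeled seed set, each pass either accepts a batch of model-inferred labels or corrects them and retrains, until every sample in the set carries a label.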
Optionally, the system further comprises:
the monitoring management unit is used for judging whether the inference accuracy rate of the deep learning model used by the user in use meets a second preset requirement: if not, the deep learning model is offline;
and the training unit is also used for further training the offline deep learning model according to the subsequently expanded training set.
In this alternative, whether a deep learning model in the deep learning model management system is operating normally can be monitored; when a model behaves abnormally it is taken offline, and the offline model is further trained on the subsequently expanded training set so as to optimize it.
Optionally, the monitoring management unit is further configured to display a model inference interface in response to an operation on an inference center control in the main control bar, and, in response to an operation on an automatic update control of a trained deep learning model's entry in the model inference interface, to cause the training unit to further train that model whenever the labeling unit subsequently expands its training set.
In this alternative, further training may be automatically initiated to optimize the deep learning model corresponding to the extended training set.
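A minimal sketch of the monitoring behaviour described above, with assumed names (`ModelMonitor`, `report`) and an assumed accuracy requirement; the patent specifies only that a model whose in-use inference accuracy misses the second preset requirement is taken offline and further trained on the expanded training set.

```python
class ModelMonitor:
    """Take a deployed model offline when its in-service inference accuracy
    falls below the preset requirement, and queue it for further training."""
    def __init__(self, required_accuracy=0.9):
        self.required_accuracy = required_accuracy
        self.online = {}          # name -> deployed model
        self.retrain_queue = []   # (name, model) pairs pulled offline

    def report(self, name, correct, total):
        """Feed back in-service inference results for one deployed model."""
        if total and correct / total < self.required_accuracy:
            # Below the requirement: pull the model offline for retraining.
            self.retrain_queue.append((name, self.online.pop(name)))

monitor = ModelMonitor(required_accuracy=0.9)
monitor.online["defect-detector"] = "model-v1"
monitor.report("defect-detector", correct=80, total=100)  # 80% < 90%: offline
```

After the `report` call the model is no longer online and sits in the retraining queue, from which the training unit would pick it up together with the expanded training set.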
Optionally, the labeling unit is configured to obtain a plurality of training sets;
and the training unit is used for training at least one corresponding deep learning model according to each training set.
In this alternative, training sets in multiple domains may be obtained, and multiple deep learning models may be trained from the training set corresponding to each domain.
Optionally, the training unit is configured to train a plurality of deep learning models according to the training set.
In this alternative, it is possible to implement training of deep learning models in multiple fields, respectively, to obtain multiple deep learning models.
Optionally, the plurality of deep learning models belong to at least two deep learning frameworks.
This alternative makes it possible to use unified specifications for model conversion, model application and the like across different frameworks.
The second aspect of the present invention provides a deep learning model management method, which is applied to a terminal device, and the method includes:
the method comprises the steps of responding to the operation of a data set management control in a main control column to display a data set management interface, responding to the operation of a labeling control of the data set management interface to display a labeling task distribution interface, responding to the operation of the labeling task distribution control of the labeling task distribution interface to issue a labeling task, and acquiring a training set, wherein the training set comprises a plurality of training samples labeled with labels;
training at least one deep learning model according to the training set;
and publishing the trained deep learning model for online use by users.
The deep learning model management method provided by the second aspect of the invention obtains at least one deep learning model by training a plurality of training samples marked with labels, and releases the trained deep learning model for the user to use on line, thereby realizing the construction of a deep learning platform closed-loop system with a full life cycle.
A third aspect of the invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the second aspect of the invention when executing the program.
A fourth aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method according to the second aspect of the invention.
The invention has the following beneficial effects:
according to the technical scheme, the marking unit, the training unit and the publishing unit are arranged, a plurality of training samples marked with labels are trained to obtain at least one deep learning model, the trained deep learning model is published to construct a deep learning platform closed-loop system with a full life cycle, and the model is online in the system for a user to use. Furthermore, by integrating the learning platforms with different depths at the bottom layer, a data management system is constructed, marking tools and marking standards are unified, a flow model training scheme is constructed, and a one-stop model deployment and maintenance scheme is constructed, so that the purposes of online and offline model optimization, rapid deployment and efficient monitoring can be realized.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
FIG. 1 illustrates an exemplary system architecture diagram in which an embodiment of the present invention may be applied.
FIG. 2 is a schematic diagram of the core functional architecture of the deep learning model management platform according to an embodiment of the present invention.
FIG. 3 illustrates a dataset management host interface schematic of a master control bar according to an embodiment of the invention.
FIG. 4 is a diagram illustrating an import picture interface according to an embodiment of the present invention.
FIG. 5 illustrates a task assignment interface of one embodiment of the present invention.
FIG. 6 is a diagram illustrating a data annotation process according to an embodiment of the present invention.
FIG. 7 illustrates a verification data interface diagram according to an embodiment of the invention.
FIG. 8 is a diagram illustrating a process for publishing a data set according to an embodiment of the invention.
FIG. 9 is a diagram illustrating a tagging tool main interface of a main control bar according to an embodiment of the present invention.
FIG. 10 is a diagram illustrating an annotation format setup window according to an embodiment of the invention.
FIG. 11 is a diagram illustrating an image intelligent annotation interface according to an embodiment of the invention.
FIG. 12 shows a schematic diagram of a training center process of one embodiment of the present invention.
FIG. 13 illustrates a training center primary interface diagram for a primary control bar of an embodiment of the present invention.
FIG. 14 shows a schematic diagram of model online and optimization according to an embodiment of the invention.
FIG. 15 illustrates an inference center primary interface schematic of a primary control bar, according to an embodiment of the invention.
FIG. 16 shows an architectural diagram of a deep learning platform of an embodiment of the invention.
FIG. 17 shows a workflow diagram of one embodiment of the present invention.
Fig. 18 is a schematic structural diagram of a computer system implementing the apparatus provided in the embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the present invention, the present invention is further described below with reference to the following examples and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the deep learning model management system of the present invention may be applied.
As shown in fig. 1, the system architecture 100 may include first terminal devices 101, 102, 103, servers 104, 105, 106, and second terminal devices 107, 108, 109. Wherein the communication between the first terminal device 101, 102, 103 and the server 104 and between the second terminal device 107, 108, 109 and the server 105 may be provided via a network or may be achieved via a local connection. The network may include various connection types, such as wired, wireless communication links or fiber optic cables, 4G networks or 5G networks, and so forth. Similarly, communication between server 104 and server 105, and between server 104 and server 106 may also be provided through a network or through local connections.
The user may interact with the servers 104, 105 over the network using the first terminal devices 101, 102, 103 or the second terminal devices 107, 108, 109, respectively, to receive or send messages or the like. The first terminal device 101, 102, 103 or the second terminal device 107, 108, 109 may be installed with various communication client applications, such as a business management application, a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The first terminal devices 101, 102, 103 and the second terminal devices 107, 108, 109 may be hardware or software. When the first terminal devices 101, 102, 103 and the second terminal devices 107, 108, 109 are hardware, they may be various electronic devices having a display screen and supporting service management, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the first terminal devices 101, 102, 103 and the second terminal devices 107, 108, 109 are software, they can be installed in the electronic devices listed above. It may be implemented as a plurality of software or software modules (for example to provide distributed services) or as a single software or software module. And is not particularly limited herein.
The server 104 may be a server providing various services, for example receiving training samples generated by the first terminal devices 101, 102, 103 and sending them to other servers. The training set may include the labels of all samples in the sample set and the training samples to be labeled, generated by the first terminal device 101, 102, 103.
The server 105 may be a server that provides various services, for example, to provide support for a deep learning model management application of the second terminal devices 107, 108, 109, implement a function of training deep learning models in various fields, and provide an application program interface service, for example, an API service, for a deep learning model management platform.
The server 106 may be a server providing various services, for example supporting local servers such as Linux, Windows, and SQL databases, as well as cloud service providers such as Alibaba Cloud and AWS; the server 106 may also support a one-click deployment function for the deep learning model management platform environment, to save users environment-deployment time.
The servers need to display CPU utilization, memory utilization, hard disk utilization, uplink bandwidth, downlink bandwidth, resource utilization, load details, disk IO, network IO, and the like; a GPU server also needs to display GPU task volume, GPU utilization, GPU processing rate, and the like. The deep learning model management platform monitors server performance and running state against the safety thresholds set for servers 104, 105 and 106, and predicts server downtime risks in time to ensure normal operation of the platform.
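The threshold-based server monitoring can be sketched as follows; the metric names and threshold values here are illustrative assumptions, since the patent only states that safety thresholds are set per server.

```python
# Assumed safety thresholds, expressed as utilisation fractions.
SAFETY_THRESHOLDS = {"cpu": 0.90, "memory": 0.85, "disk": 0.95, "gpu": 0.98}

def at_risk(metrics, thresholds=SAFETY_THRESHOLDS):
    """Return the metrics that exceed their safety threshold, i.e. the
    signals that a server is in danger of going down."""
    return [name for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

alerts = at_risk({"cpu": 0.93, "memory": 0.40, "disk": 0.96})
```

Here `alerts` would contain `"cpu"` and `"disk"`, prompting the platform to warn of downtime risk before the server actually fails.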
The servers 104, 105, and 106 may be hardware or software. When the servers 104, 105, and 106 are hardware, they may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the servers 104, 105, 106 are software, they may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices and servers in fig. 1 is merely illustrative. There may be any suitable number of terminal devices and servers, as desired for implementation.
One embodiment of the present invention provides a deep learning model management system, which may also be referred to as a deep learning model management platform, including:
a labeling unit, used for displaying a data set management interface in response to an operation on a data set management control in a main control bar, displaying a labeling task allocation interface in response to an operation on a labeling control of the data set management interface, and issuing a labeling task and acquiring a training set in response to an operation on the labeling task allocation control of the labeling task allocation interface, wherein the training set comprises a plurality of training samples labeled with labels;
the training unit is used for training at least one deep learning model according to the training set;
and a publishing unit, used for publishing the trained deep learning model for online use by users.
In a specific example, the deep learning model management platform can be applied to various fields which need to input a large number of samples and obtain the deep learning model in the training process, such as the fields of unmanned driving, medicine, face recognition, speech understanding and translation and the like.
By providing the labeling unit, the training unit and the publishing unit, the deep learning model management system of this embodiment can train on a plurality of labeled training samples to obtain at least one deep learning model and publish the trained model, constructing a full-life-cycle closed-loop deep learning platform system in which models go online for users to use.
In a specific example, fig. 2 shows a core functional architecture diagram of a deep learning model management platform, which mainly includes a data layer, a product layer and a solution layer, wherein the data layer supports and can parse data formats such as pictures, videos, texts, CT pictures, infrared images and audio; the product layer comprises a training system, a data management module, a marking tool module, a training center module, a model market module, an inference center module, a server monitoring module and the like; the scheme layer comprises modules for detection/classification, video monitoring, a logistics system, auxiliary diagnosis and treatment, X-ray machine foreign matter detection and the like.
In this embodiment, the main control bar list of the deep learning model management platform is located in the left area of the platform; this area is displayed at all times and does not disappear as the interface jumps. After a control bar in the main control bar list is clicked with the mouse, the display area in the lower right, apart from the main control bar on the left and the deep learning model management platform bar at the top, is transformed. The main control bar list comprises server resource management, user management, data set management, labeling tool, model market, training center, inference center and training system.
The server resource management mainly comprises submodules of server performance monitoring, service load monitoring, GPU node monitoring, network state monitoring, network IO monitoring, disk IO state monitoring and the like, and is mainly used for detecting the performance of a server incorporated into a deep learning model management platform, dynamically managing computing resources, ensuring the normal operation of the bottom layer of a system and providing basic environment guarantee.
The user management comprises an administrator account, an inspector account, an annotator account, a model administrator account and a common account. The administrator account is responsible for server management and monitoring, mainly including adding accounts, assigning tasks, activating accounts, deactivating accounts, and the like; the inspector account is responsible for data uploading, management, task allocation, labeling, and evaluating the quality of labeled data; the annotator account is responsible for labeling task data; the model administrator is responsible for training, testing, deploying and bringing models online; the common account can view the current data state, server state, model state and the like, but has no modification authority.
The data set management mainly comprises sub-modules such as task management, data annotation, data management and data quality, and aims to enable users to organize data effectively, manage it more efficiently, evaluate data quality, and prepare for subsequent model training.
In one particular example, data set management can also implement functions to view data, import data, export data, assign tasks, check data, publish data, remove data from training, delete data, iterate data versions, and the like. FIG. 3 shows the display interface of data set management in the main control bar of the deep learning model management platform; the upper left corner displays "+ New", and clicking this position with the mouse creates a new data set group. The display interface further displays the created data set group ID and the specific information included in the data set group, for example entries comprising version, data volume, latest import status, label type, data set quality, release status, operations, and the like, where one entry displays, for the data set group, information such as version number, data volume, label type, labeling-status percentage, data-set-quality percentage, release status, and operations. The operations comprise viewing data, importing data, exporting data, assigning tasks, checking data, publishing data, removing data from training, and deleting data; clicking the corresponding operation with the mouse switches the interface to that operation's interface. The display interface also displays the number of records in the data set group and the page numbers corresponding to the specific data of those records.
When the mouse clicks the import-data operation of an entry under the data set management interface, the interface switches to an import-data operation interface, for example the import picture interface shown in fig. 4. The interface comprises a created-information part, a labeling-information part, a data-cleaning part and an import-data part. The created-information part comprises data set ID information, version number information and remarks, and clicking the small pen beside the remarks allows a remark to be added to the data set. The labeling-information part comprises the label type, the label template, the total data amount, the labeled labels, the number of targets, and the picture size to be confirmed. The data-cleaning part indicates whether the data set has undergone a data cleaning task. The import-data information comprises the data labeling state and the import mode; for example, the data labeling state is unlabeled or labeled, and the import mode is a local data set, an online data set, imported camera data, or data obtained by a cloud service call.
When the mouse clicks the task allocation operation of an entry under the data set management interface, the interface switches to the task allocation interface shown in fig. 5. Tasks are allocated by assigning data labeling tasks to different roles, such as annotators and inspectors, while ensuring an amount of overlapping data between the annotator and inspector roles; the overlapping data is later used to evaluate the quality of the data labeled by the annotator role. The task allocation interface requires the total data task to be entered, a data quality standard to be selected and an IOU value to be entered, where IOU is the overlap of two regions divided by their union; in this embodiment it is the amount of overlapping data between the annotator and inspector roles divided by the total data labeled by the two roles together. After the total data task is entered, the data quality standard is selected and the IOU is entered, the data tasks to be labeled are determined; users and roles can then be selected in the task allocation interface and their specific tasks entered. Multiple users can be added or deleted, and a user's role can be annotator or inspector.
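As this embodiment defines it, the IOU is computed over labeled data items rather than over bounding boxes; a small sketch under that reading (the function name is an assumption):

```python
def iou(annotator_items, inspector_items):
    """Overlap of the two roles' labeled data divided by their union."""
    a, b = set(annotator_items), set(inspector_items)
    return len(a & b) / len(a | b) if a | b else 0.0

# 100 shared items out of 500 distinct items in total:
overlap_ratio = iou(range(0, 300), range(200, 500))   # 0.2
```

The same ratio applied to two overlapping regions of pixels is the familiar bounding-box IOU used elsewhere in object detection.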
In one specific example, as shown in FIG. 6, the data annotation process is as follows:
First, a labeled data set of 100000 image samples is created, i.e. 100000 images are imported into the deep learning model management platform, and the labeling data quality standard is confirmed on the task allocation interface. Tasks are then allocated: the data management module of the platform randomly selects 5000 image samples to be labeled and allocates them to several annotators and an inspector, such as annotator A, annotator B and inspector A, who label in a uniform manner and format while ensuring an amount of overlapping data between the annotator and inspector roles. Data quality is then cross-verified: if the quality does not meet the requirement, the low-quality data set is rechecked and the process returns to cross-verification; if it does, a choice is made between manual labeling and model labeling. If manual labeling is selected, it is judged whether the data set has been completely labeled; if not, the process returns to the annotators and inspectors, and if finished, manual labeling ends. If model labeling is selected, a model needs to be trained first, and the trained model is then used for labeling.
Before the trained model is used for model labeling, the labels made by the annotators need to be checked; data inspection means manually judging, according to the data quality, whether sampled measurement of the data is needed.
In a possible implementation manner, the labeling unit is further configured to display a verification setting interface in response to an operation on a verification control of the data set management interface, determine a verification parameter in response to an operation on the verification setting control of the verification setting interface, determine a labeling quality standard in response to an operation on a labeling quality standard setting control of the labeling task allocation interface, and issue a verification task in response to an operation on a verification task allocation control of the labeling task allocation interface.
In a possible implementation manner, the verification parameters include a verification mode, and further include a verification quantity or a verification proportion. In one specific example, data verification is divided into automatic verification and random sampling, wherein
an example of the automatic verification mode is as follows:
A picture labeling task with an overlapping data volume of a = 500 is distributed to an annotator and an inspector. When they have labeled b = 300 of the overlapping pictures and agree on c1 = 200 of them, the annotator's accuracy is Q = c1/b = 200/300 = 66.7%. As the number of pictures labeled by both the annotator and the inspector grows, the accuracy Q changes dynamically; finally, when the annotator and the inspector have labeled all the overlapping data a = 500 and agree on c2 = 400 pictures, the annotator's accuracy is Q = c2/a = 400/500 = 80%.
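The running accuracy Q = (consistent labels) / (overlapping labels completed) used in this example can be sketched as a small helper; the function name is illustrative, not part of the platform:

```python
def annotator_accuracy(consistent: int, overlap: int) -> float:
    """Accuracy Q of an annotator: the fraction of overlapping samples
    on which the annotator's label agrees with the inspector's."""
    if overlap <= 0:
        raise ValueError("overlap must be positive")
    return consistent / overlap

# Mid-task: the two roles agree on 200 of 300 overlapping pictures.
q_partial = annotator_accuracy(consistent=200, overlap=300)
# Task complete: they agree on 400 of all 500 overlapping pictures.
q_final = annotator_accuracy(consistent=400, overlap=500)
```

As the example notes, Q is recomputed as the shared labeled count grows, so `q_partial` (about 66.7%) later settles to `q_final` (80%).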
An example of the random sampling mode is as follows:
A certain number of pictures, for example 500, is randomly drawn from the data labeled by the annotator, and an inspector re-judges these 500 pictures to verify whether the annotator's labels are correct. For example, if the inspector finds that 400 of the 500 pictures labeled by the annotator agree with his own judgment, the annotator's accuracy is Q = 400/500 = 80%.
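The random sampling mode can be sketched as below; `sample_for_inspection`, the seed, and the toy inspector are illustrative stand-ins, not platform APIs:

```python
import random

def sample_for_inspection(labeled, k, seed=0):
    """Randomly draw k items from the annotator's labeled data
    for an inspector to re-judge."""
    return random.Random(seed).sample(labeled, k)

def spot_check_accuracy(sampled, agrees):
    """Q = fraction of sampled items on which the inspector
    agrees with the annotator's label."""
    consistent = sum(1 for item in sampled if agrees(item))
    return consistent / len(sampled)

annotated = list(range(10000))              # indices of labeled pictures
batch = sample_for_inspection(annotated, k=500)
# Toy inspector: disagrees only on items whose index is a multiple of 5.
q = spot_check_accuracy(batch, lambda i: i % 5 != 0)
```

With a real inspector the `agrees` callback would be a human judgment rather than a function of the index.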
In a specific example, when the verify-data operation of an item under the data set management interface is clicked with the mouse, the interface is linked to the verification data interface shown in fig. 7, which includes selection of the import mode, the total data amount and the labeled amount. It also includes the annotator information, the labeled/overlapping amount, the data quality, and the sampling-test, fixed-quantity and fixed-proportion items; the fixed quantity and the fixed proportion are linked, namely in the data verification stage of this embodiment it is sufficient to satisfy either one of them.
The data set can be published after it passes verification, and whether to publish can be decided according to information such as the data quality and the total labeled amount. In a specific example, as shown in fig. 8, the data set is first cleaned and then verified; if verification fails, the flow returns to the cleaning step until the data set passes verification, after which the publishing step is entered. When both the quality and the quantity of the data set meet the publishing requirement, a production model deployed on the deep learning model management platform automatically trains a model on the published data set and is associated with it and automatically updated, so as to improve the model's autonomous training capability.
In a possible implementation manner, the labeling task corresponds to a part of a training set, and the labeling unit is configured to obtain the training set and includes:
displaying an intelligent labeling interface in response to an operation on a labeling tool control in a main control bar, and in response to an operation on an intelligent labeling control of the intelligent labeling interface: obtaining a sample set, wherein the sample set comprises part of training samples marked with labels and samples to be expanded except the training samples, and the part of training samples marked with the labels are from the part of training set; training a first deep learning model according to the training samples; inputting part of samples to be expanded into a first deep learning model obtained through training for reasoning so as to obtain labels of the part of samples to be expanded; judging whether the reasoning accuracy of the first deep learning model obtained by training meets a first preset requirement: if not, correcting the labels of part of samples in the samples to be expanded, expanding the training samples according to the corrected part of samples in the samples to be expanded, and switching to the first deep learning model trained according to the training samples; and if so, inputting the rest samples in the samples to be expanded into the first deep learning model obtained by training for reasoning so as to obtain the labels of the rest samples in the samples to be expanded, and thus obtaining the labels of all samples in the sample set so as to obtain a training set.
In one specific example, the sample set contains about 100000 image samples, of which 5000 are training samples labeled with labels and 95000 are samples not yet labeled. A first deep learning model is trained according to the 5000 labeled training samples, and 1000 of the 95000 unlabeled samples are input into the trained first deep learning model for inference to obtain the labels of these 1000 samples. It is then judged whether the inference accuracy of the trained first deep learning model meets a first preset requirement, for example an inference accuracy greater than or equal to 80%. If not, the labels of the 1000 samples are corrected and the training samples are expanded with the corrected samples, i.e. the first deep learning model is retrained on the 5000 labeled training samples plus the 1000 corrected samples; 2000 of the remaining 94000 unlabeled samples are then input into the retrained first deep learning model for inference to obtain their labels, it is again judged whether the inference accuracy is greater than or equal to 80%, and so on, until the inference accuracy of the trained first deep learning model is greater than or equal to 80%. The remaining samples to be expanded are then input into the trained first deep learning model for inference to obtain their labels, so that the labels of all 100000 image samples in the sample set are obtained and the training set is formed.
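The iterate-train-expand loop of this example can be sketched as follows. All names here are hypothetical stand-ins for the platform's real components: `train`, `infer` and `accuracy` are toy stubs in which the "model" is just the training-set size and accuracy rises as that size grows.

```python
def intelligent_label(labeled, unlabeled, batch_size, train, infer, accuracy,
                      threshold=0.8, review=None):
    """Self-training loop: train on the labeled samples, infer labels for a
    batch of unlabeled samples, then either expand the training set with
    (corrected) predictions and retrain, or, once accuracy meets the
    threshold, label everything remaining in one inference pass."""
    model = train(labeled)
    while unlabeled:
        batch, unlabeled = unlabeled[:batch_size], unlabeled[batch_size:]
        preds = infer(model, batch)
        if accuracy(model) >= threshold:
            labeled.extend(preds)                    # accept inferred labels
            labeled.extend(infer(model, unlabeled))  # label the remainder
            break
        corrected = review(preds) if review else preds  # manual correction
        labeled.extend(corrected)                    # expand the training set
        model = train(labeled)                       # retrain on expanded set
    return model, labeled

# Toy stand-ins (NOT platform APIs):
def train(samples):
    return len(samples)            # "model" = size of its training set

def infer(model, samples):
    return [(s, "label") for s in samples]

def accuracy(model):
    return min(model / 8000, 1.0)  # accuracy grows with training-set size

model, labeled = intelligent_label(
    labeled=[(i, "label") for i in range(5000)],
    unlabeled=list(range(5000, 100000)),
    batch_size=1000, train=train, infer=infer, accuracy=accuracy)
```

With these stubs the loop retrains twice (5000 → 6000 → 7000 labeled samples) before the 80% threshold is met, after which the remaining samples are labeled in a single pass.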
According to this implementation mode, a deep learning model is obtained through staged training on the part of the training samples labeled with labels; the model obtained in each stage is used to expand the data samples, the model is further trained on the expanded samples, and this cycle repeats until a deep learning model whose inference accuracy meets the requirement is obtained. That model then infers the labels of the unlabeled samples in the sample set to obtain the training set, so that a large sample set can be fully labeled.
The labeling tool mainly comprises sub-modules such as image classification, image segmentation, video labeling and NLP labeling, and is mainly used for effectively labeling data of different types, unifying labeling modes and formats, providing labeling information for training, and providing a standard for subsequent model evaluation. The data labeling process can dynamically display information such as manual labeling, intelligent labeling, work to be completed, completion rate and data set quality. The labeling process and the quality process of rechecking the data set are shown in the figures below; data quality is ensured through role division and a standardized data auditing process, providing a basic data guarantee for model training. The intelligent labeling process can rapidly increase the labeling speed of the data and improve labeling efficiency.
The deep learning model management platform can provide different marking tools aiming at different types of marking modes such as image classification, image segmentation, video marking, text marking and the like, and effectively mark data of data types such as pictures, videos, texts, CT pictures, infrared images, audios and the like contained in a data layer of the deep learning model management platform. In a specific example, the deep learning model management platform supports image annotation, text annotation, video annotation, 3D data annotation, and the like, and can be applied to various scenes such as image classification, multi-target detection, single-target detection, and the like. As shown in fig. 9, clicking the "annotation tool" of the main control bar displays the annotation tool main interface shown in fig. 9. And the annotation tool interface displays online annotation of image annotation, text annotation or video annotation.
The annotation unit comprises an annotation tool, and the annotation mode and format need to be unified in the annotation process. Clicking the image annotation of fig. 9 displays the annotation format setting window shown in fig. 10; for example, the annotation format of the image annotation is selected as the test1 data set with the Pascal VOC 2013 annotation format and a storage format with the suffix .json, so that annotation information is provided for the training set and a standard is provided for subsequent model evaluation. After the settings are confirmed, clicking the intelligent annotation homepage shown in fig. 10 displays the intelligent image annotation interface shown in fig. 11. Clicking "create intelligent annotation task" in fig. 11 with the mouse displays the data set ID, data set name, version, intelligent annotation state and operation items on the interface; clicking "intelligent annotation" under the operation column intelligently annotates the data under that item. The intelligent image annotation interface shown in fig. 11 supports not only intelligent image annotation tasks but also intelligent text annotation tasks; in addition, the display interface further comprises the number of records in the intelligent annotation task and the number of pages corresponding to those records.
The model market mainly comprises sub-modules such as model deployment, version management, interface management and model application, and aims to manage trained models and release them to the model market, to provide API (application programming interface) services for models published on the deep learning model management platform under the SaaS (software as a service) condition, and to perform localized deployment under the localized (on-premises) condition.
The training center mainly comprises sub-modules of data preprocessing, model visualization, model evaluation, model release and the like, and aims to more rapidly confirm the effective state of model training in a visualization mode, so that the training process is standardized, and the efficiency of model training is improved; meanwhile, the model can be subjected to distributed training and training resource scheduling, so that idle computing resources are fully utilized, and the effective utilization rate of the resources is improved.
In a specific example, the training center can also realize functions such as creating a project, creating a model, model comparison, model evaluation, training details and training task management. Creating a model divides the data set into a training set and a verification set, configures a built-in model of the deep learning model management platform according to a 9:1 ratio of training set to verification set, sets the hyper-parameters, associates the number of training servers and stores the subsequent model storage path. Model comparison standardizes classification models such as VGG and ResNet and target detection models such as Faster R-CNN and YOLO, as well as the model evaluation indexes and standards; the core evaluation indexes include but are not limited to: accuracy, F1 score, precision, recall, FBeta, ROC, mAP, etc. The task management module classifies training data as trained, in training, to be trained, cancelled, automatically updated, deleted and the like. The back end of the deep learning model management platform dynamically adjusts the GPUs to be used according to the training tasks and the GPU usage of the current server, such as the number of services for training and inference, so that GPU resources are fully utilized.
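Several of the core evaluation indexes listed above (precision, recall, FBeta, with F1 as the beta = 1 case) can be computed from confusion counts as in this generic sketch, which is not platform code:

```python
def precision_recall_fbeta(tp, fp, fn, beta=1.0):
    """Compute precision, recall and the F-beta score (beta=1 gives F1)
    from true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    b2 = beta * beta
    fbeta = (1 + b2) * precision * recall / (b2 * precision + recall)
    return precision, recall, fbeta

# Example counts: 80 true positives, 20 false positives, 20 false negatives.
p, r, f1 = precision_recall_fbeta(tp=80, fp=20, fn=20)
```

Setting `beta=2` weights recall more heavily than precision, which is the usual reason an FBeta index is listed alongside F1.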
In one specific example, as shown in FIG. 12, the training center process is as follows:
Firstly, a published data set is associated with a training project, and then a model is created. Model creation comprises associating the data set, selecting the model, setting the hyper-parameters and configuring the training server; namely, the data set is divided into a training set and a verification set, a built-in model of the deep learning model management platform is conventionally configured according to a 9:1 ratio of training set to verification set, the hyper-parameters are set, the number of training servers is associated and the subsequent model storage path is stored. Training management is then entered, which classifies the training data of the data set as trained, in training, to be trained, training-cancelled and deleted. After training management, the trained model is evaluated, and the model is published once the evaluation is passed.
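The training/verification split performed at model creation (assuming the commonly used 9:1 ratio) can be sketched as follows; the function name and seed are illustrative:

```python
import random

def split_dataset(samples, train_ratio=0.9, seed=0):
    """Shuffle a published data set and split it into training and
    verification sets (default ratio 9:1)."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train_set, val_set = split_dataset(range(10000))
```

Shuffling before cutting avoids any ordering bias in the published data set leaking into the verification set.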
In a specific example, as shown in fig. 13, clicking the training center of the main control bar with the mouse switches the interface to the model training center interface, which includes the specific situations of new projects, existing projects, published data sets, general models, industry models and my models; in the model overview part, the specific situations of trained, in-training, cancelled, automatically updated and deleted models are also displayed respectively. The model training center interface also monitors the overall, inference, training and idle utilization rates of the models, so that model usage at a given moment can be checked in the model training center.
The reasoning center comprises submodules of off-line testing, on-line testing, gray level deployment, model monitoring and the like, and aims to test and online the production environment model, monitor the health degree of the online model and ensure the normal operation of the model.
In one possible implementation manner, the deep learning model management platform further includes:
the monitoring management unit is used for judging whether the inference accuracy of the deep learning model in use by the user meets a second preset requirement: if not, taking the deep learning model offline;
the training unit is also used for further training the offline deep learning model according to the subsequently expanded training set.
In a specific example, as shown in fig. 14, online data is sent to deployment model A, deployment model B and manual judgment respectively; after being sent to deployment model A and deployment model B, model inference results are obtained through their inference. For the three judgment modes (deployment model A, deployment model B and manual judgment), model evaluation is performed, followed by a judgment of whether the online requirement is met. If not, the model is taken offline and optimized, fed back to deployment model A and deployment model B, and output of inference results continues until the online requirement is met; the model then goes online and the inference result is obtained. Periodic manual sampling determines whether the online-requirement judgment needs to be made again, and once the result is uploaded to the production system, one pass of the inference center process is completed.
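The "meets the online requirement" decision over the candidate deployments can be sketched as a simple evaluation step; the model names and threshold are illustrative:

```python
def evaluate_candidates(results, threshold):
    """Split deployment candidates by whether their evaluated inference
    accuracy meets the online requirement; failing models are taken
    offline for optimization and retraining."""
    online = {name: acc for name, acc in results.items() if acc >= threshold}
    offline = {name: acc for name, acc in results.items() if acc < threshold}
    return online, offline

# Hypothetical evaluation results for two deployed models:
online, offline = evaluate_candidates(
    {"deploy_model_A": 0.91, "deploy_model_B": 0.74}, threshold=0.85)
```

In the figure's flow, the `offline` set would be fed back for model optimization and re-evaluated until it clears the threshold.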
According to this optional mode, whether the deep learning model in the deep learning model management system can be applied normally is monitored; when the model behaves abnormally, it is taken offline, and the offline deep learning model continues to be further trained according to the subsequently expanded training set so as to optimize it.
In a possible implementation manner, the monitoring management unit is further configured to display a model inference interface in response to an operation on the inference center control in the main control bar, and, in response to an operation on the automatic-update control of the entry for a corresponding trained deep learning model on the model inference interface, to enable the training unit to further train that trained deep learning model according to the subsequently expanded training set whenever the labeling unit subsequently expands the training set.
In a specific example, as shown in fig. 15, a mouse clicks on an inference center of a main control bar, and the interface is switched to an inference center interface, where the interface includes a model comparison portion, where the portion may include a plurality of inference tasks, fig. 15 illustrates inference task 1 and inference task 2, and each inference task includes a sequence number, a name, a version, an automatic update, a model path, a test path, a picture total amount, an inference amount, and an operation entry, where the automatic update may freely select whether to update, and when the automatic update is selected, the training unit may further train a corresponding trained deep learning model according to a subsequent extended training set when the labeling unit subsequently extends the training set.
According to the implementation mode, further training can be automatically started to optimize the deep learning model corresponding to the expanded training set.
In a possible implementation manner, the labeling unit is configured to obtain a plurality of training sets;
and the training unit is used for training at least one corresponding deep learning model according to each training set.
In this implementation, training sets in multiple domains may be obtained, and multiple deep learning models may be trained according to the training set corresponding to each domain.
In a possible implementation manner, the training unit is configured to train a plurality of deep learning models according to the training set.
In a specific example, the deep learning model management platform can be applied to multiple fields such as unmanned driving, medicine, face recognition, and speech understanding and translation. Each field corresponds to one training set, and multiple fields correspond to multiple training sets, so the labeling unit needs to label different training samples for different fields, train the deep learning model, and infer image sample labels with the deep learning model to obtain the different training sets. In addition, since each field also needs to implement different functions, the deep learning model management platform can use the training set corresponding to a field to train different deep learning models respectively to implement the different functions.
The training system mainly comprises sub-modules such as examination practice, question bank management, simulation examination and examination management, and aims to help users become familiar with the data and to train client personnel to label data.
In one possible implementation, the plurality of deep learning models belong to at least two deep learning frameworks.
In a specific example, the deep learning frameworks include TensorFlow, PyTorch, Caffe, PaddlePaddle and the like, and the deep learning models trained by the training unit belong to at least two deep learning frameworks, so that the deep learning model management platform establishes unified specifications for model conversion, performance and the like across the different frameworks.
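One common way such cross-framework unification is achieved is a shared adapter interface that every framework-specific model must implement; this is a hypothetical sketch, not the platform's actual API, and `DummyAdapter` is a stand-in:

```python
from abc import ABC, abstractmethod

class ModelAdapter(ABC):
    """Unified interface a management platform might require every
    framework-specific model (TensorFlow, PyTorch, ...) to implement."""
    @abstractmethod
    def load(self, path: str) -> None: ...

    @abstractmethod
    def predict(self, batch: list) -> list: ...

class DummyAdapter(ModelAdapter):
    """Trivial stand-in demonstrating the adapter contract."""
    def load(self, path: str) -> None:
        self.path = path

    def predict(self, batch: list) -> list:
        return [0 for _ in batch]   # placeholder predictions

adapter = DummyAdapter()
adapter.load("/models/demo")        # hypothetical model path
out = adapter.predict([1, 2, 3])
```

Behind such an interface, conversion and performance specifications can be checked uniformly regardless of which framework produced the model.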
In a specific example, as shown in fig. 16, in the architecture diagram of the deep learning platform, both Data VIP and Back VIP represent virtual IPs, and the Data VIP provides a highly available communication architecture. The Web service is the front-end application of the deep learning model management platform; the front end needs to interface with data, which is associated with the front-end application on one hand, stored in a database on another, and connected to the back-end application on a third. The front-end application and the database use the Data VIP, and the back-end application uses the Back VIP. The back-end application is oriented toward back-end services of the application layer, runs on conventional general-purpose servers, and can access back-end agent_N instances through the virtual IP. Agent_N is a background Docker application that calls the GPU servers; GPU scheduling, management and model training are all allocated within agent_N, and agent_N exposes a unified calling service to interact with the back-end application.
To sum up, the embodiment of the present invention constructs a full-lifecycle closed-loop deep learning platform system whose workflow, shown for example in fig. 17, comprises: providing deep learning platform data and importing the data into the platform; allocating tasks for the data through data management; labeling the data, for example image data, by annotators and inspectors, judging by role whether manual labeling or intelligent labeling is used; verifying the data and then rechecking it to complete verification and obtain a data set; publishing the data set; training on the published data set, namely viewing the project overview, creating a project, creating a model, submitting the data set to training and managing the training tasks to obtain a trained model; publishing the model, with the model market managing and deploying the published model; testing and uploading the production-environment model through the inference center; and inferring new data given by a user through the online model to obtain the result and storing the data to the platform.
Another embodiment of the present invention provides a deep learning model management method, which is applied to a terminal device, and includes:
the method comprises the steps of responding to the operation of a data set management control in a main control column to display a data set management interface, responding to the operation of a labeling control of the data set management interface to display a labeling task distribution interface, responding to the operation of the labeling task distribution control of the labeling task distribution interface to issue a labeling task, and acquiring a training set, wherein the training set comprises a plurality of training samples labeled with labels;
training at least one deep learning model according to the training set;
and publishing the trained deep learning model for online use by a user.
The deep learning model management method provided by the embodiment of the invention obtains at least one deep learning model by training on a plurality of training samples labeled with labels, and publishes the trained deep learning model for online use by users, so that a full-lifecycle closed-loop deep learning platform system can be constructed.
It should be noted that the deep learning model management method provided in this embodiment is similar to the principles and workflow of the deep learning model management platform, and relevant parts can refer to the above description, which is not repeated herein.
As shown in fig. 18, a computer system suitable for implementing the deep learning model management system provided by the above-described embodiments includes a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage section into a random access memory (RAM). The RAM also stores various programs and data necessary for the operation of the computer system. The CPU, ROM and RAM are connected to one another via a bus, and an input/output (I/O) interface is also connected to the bus.
The following are connected to the I/O interface: an input section including a keyboard, a mouse and the like; an output section including a liquid crystal display (LCD), a speaker and the like; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card or a modem. The communication section performs communication processing via a network such as the Internet. A drive is also connected to the I/O interface as needed, and a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory is mounted on the drive as necessary, so that a computer program read from it is installed into the storage section as needed.
In particular, the processes described in the above flowcharts may be implemented as computer software programs according to the present embodiment. For example, the present embodiments include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium.
The flowchart and schematic diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to the present embodiments. In this regard, each block in the flowchart or schematic diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the schematic and/or flowchart illustration, and combinations of blocks in the schematic and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
On the other hand, the present embodiment also provides a nonvolatile computer storage medium, which may be the nonvolatile computer storage medium included in the apparatus in the foregoing embodiment, or may be a nonvolatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to:
displaying a data set management interface by using a labeling unit in response to the operation on a data set management control in a main control column, displaying a labeling task distribution interface in response to the operation on a labeling control of the data set management interface, issuing a labeling task in response to the operation on the labeling task distribution control of the labeling task distribution interface, and acquiring a training set, wherein the training set comprises a plurality of training samples labeled with labels;
training at least one deep learning model according to the training set by utilizing a training unit;
and publishing the trained deep learning model by using a publishing unit for online use by a user.
In the description of the present invention, it should be noted that the terms "upper", "lower", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, which are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and operate, and thus, should not be construed as limiting the present invention. Unless expressly stated or limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly and encompass, for example, both fixed and removable coupling as well as integral coupling; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood according to specific situations by those of ordinary skill in the art.
It is further noted that, in the description of the present invention, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" or "comprising" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention, and it will be obvious to those skilled in the art that other variations and modifications can be made on the basis of the above description, and all embodiments cannot be exhaustive, and all obvious variations and modifications belonging to the technical scheme of the present invention are within the protection scope of the present invention.

Claims (12)

1. A deep learning model management system, comprising:
the system comprises a marking unit, a data set management control unit and a data set distribution unit, wherein the marking unit is used for responding to the operation of a data set management control in a main control bar to display a data set management interface, responding to the operation of a marking control of the data set management interface to display a marking task distribution interface, responding to the operation of the marking task distribution control of the marking task distribution interface to issue a marking task and acquire a training set, and the training set comprises a plurality of training samples marked with labels;
the training unit is used for training at least one deep learning model according to the training set;
and the issuing unit is used for issuing the trained deep learning model for online use by a user.
2. The system of claim 1, wherein the labeling unit is further configured to display a verification setting interface in response to an operation on a verification control of the data set management interface, determine a verification parameter in response to an operation on a verification setting control of the verification setting interface, determine a labeling quality criterion in response to an operation on a labeling quality criterion setting control of the labeling task distribution interface, and issue a verification task in response to an operation on a verification task distribution control of the labeling task distribution interface.
3. The system of claim 2, wherein the verification parameters include a verification mode, a verification quantity or a verification proportion.
4. The system of claim 1, wherein the labeling task corresponds to a partial training set, and the labeling unit is configured to obtain the training set and includes:
displaying an intelligent labeling interface in response to an operation on a labeling tool control in a main control bar, and, in response to an operation on an intelligent labeling control of the intelligent labeling interface: obtaining a sample set, wherein the sample set comprises the labeled training samples from the partial training set and samples to be expanded other than those training samples; training a first deep learning model according to the training samples; inputting some of the samples to be expanded into the trained first deep learning model for inference to obtain labels for those samples; and judging whether the inference accuracy of the trained first deep learning model meets a first preset requirement: if not, correcting the labels of those samples, expanding the training samples with the corrected samples, and returning to the step of training the first deep learning model according to the training samples; if so, inputting the remaining samples to be expanded into the trained first deep learning model for inference to obtain their labels, so that all samples in the sample set are labeled and the training set is obtained.
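The iterative labeling loop of claim 4 can be sketched as follows. This is a hypothetical illustration: the functions passed in (`train_model`, `infer`, `evaluate`, `correct_labels`) and the threshold are assumptions standing in for the claimed model training, inference, accuracy judgment, and manual label correction:

```python
# Sketch of the claim-4 loop: train on the labeled samples, infer labels for
# a batch of unlabeled samples, and either correct the labels and retrain
# (accuracy below the first preset requirement) or label the rest directly.

def expand_training_set(labeled, unlabeled, batch_size, accuracy_threshold,
                        train_model, infer, evaluate, correct_labels):
    while unlabeled:
        model = train_model(labeled)
        batch, unlabeled = unlabeled[:batch_size], unlabeled[batch_size:]
        predictions = [(s, infer(model, s)) for s in batch]
        if evaluate(model) >= accuracy_threshold:
            # Accuracy meets the preset requirement: accept the batch labels
            # and label all remaining samples by inference alone.
            labeled.extend(predictions)
            labeled.extend((s, infer(model, s)) for s in unlabeled)
            return labeled
        # Otherwise correct the inferred labels, expand the training
        # samples, and loop back to retraining the first model.
        labeled.extend(correct_labels(predictions))
    return labeled
```

The loop terminates either when the accuracy check passes or when every sample to be expanded has passed through a correction round.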
5. The system of claim 1, further comprising:
a monitoring management unit, configured to judge whether the inference accuracy of the deep learning model in use by the user meets a second preset requirement, and if not, to take the deep learning model offline;
and the training unit is further configured to further train the offline deep learning model according to a subsequently expanded training set.
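The monitoring behavior of claims 5 and 6 can be sketched as follows. This is a hypothetical illustration; `accuracy_fn`, `retrain_fn`, the threshold, and the dictionary-based model record are assumptions, not part of the claimed system:

```python
# Sketch of the claims-5/6 loop: a deployed model whose inference accuracy
# falls below the second preset requirement is taken offline, retrained on
# the subsequently expanded training set, and brought back online.

def monitor_and_update(model, accuracy_fn, threshold, retrain_fn, new_training_set):
    if accuracy_fn(model) >= threshold:
        return model  # still meets the second preset requirement; keep online
    model["online"] = False  # take the underperforming model offline
    updated = retrain_fn(model, new_training_set)
    updated["online"] = True  # republish after further training
    return updated
```

With an automatic update control enabled (claim 6), such a check would run whenever the labeling unit expands the training set.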
6. The system according to claim 5, wherein the monitoring management unit is further configured to display a model inference interface in response to an operation on an inference center control in a master control bar, and to cause the training unit to further train the corresponding trained deep learning model according to a subsequent augmented training set when the labeling unit subsequently augments the training set in response to an operation on an automatic update control of a corresponding trained deep learning model entry of the model inference interface.
7. The system of claim 1,
the labeling unit is used for acquiring a plurality of training sets;
and the training unit is used for training at least one corresponding deep learning model according to each training set.
8. The system of claim 1, wherein the training unit is configured to train a plurality of deep learning models according to the training set.
9. The system of claim 8, wherein the plurality of deep learning models belong to at least two deep learning frameworks.
10. A deep learning model management method is applied to terminal equipment and is characterized by comprising the following steps:
displaying a data set management interface in response to an operation on a data set management control in a main control bar; displaying a labeling task distribution interface in response to an operation on a labeling control of the data set management interface; issuing a labeling task in response to an operation on a labeling task distribution control of the labeling task distribution interface; and acquiring a training set, wherein the training set comprises a plurality of training samples labeled with labels;
training at least one deep learning model according to the training set;
and publishing the trained deep learning model for online use by a user.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to claim 10 when executing the program.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of claim 10.
CN202110849991.0A 2021-07-27 2021-07-27 Deep learning model management system, method, computer device and storage medium Pending CN115686280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110849991.0A CN115686280A (en) 2021-07-27 2021-07-27 Deep learning model management system, method, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110849991.0A CN115686280A (en) 2021-07-27 2021-07-27 Deep learning model management system, method, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN115686280A true CN115686280A (en) 2023-02-03

Family

ID=85058152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110849991.0A Pending CN115686280A (en) 2021-07-27 2021-07-27 Deep learning model management system, method, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN115686280A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116662764A (en) * 2023-07-28 2023-08-29 中国电子科技集团公司第十五研究所 Data identification method for error identification correction, model training method, device and equipment
CN116662764B (en) * 2023-07-28 2023-09-29 中国电子科技集团公司第十五研究所 Data identification method for error identification correction, model training method, device and equipment
CN117174261A (en) * 2023-11-03 2023-12-05 神州医疗科技股份有限公司 Multi-type labeling flow integrating system for medical images
CN117174261B (en) * 2023-11-03 2024-03-01 神州医疗科技股份有限公司 Multi-type labeling flow integrating system for medical images

Similar Documents

Publication Publication Date Title
CN109241141B (en) Deep learning training data processing method and device
CN112966772A (en) Multi-person online image semi-automatic labeling method and system
CN115686280A (en) Deep learning model management system, method, computer device and storage medium
CN111383100A (en) Risk model-based full life cycle management and control method and device
CN110489749A (en) Intelligent Office-Automation System Work Flow Optimizing
CN112270533A (en) Data processing method and device, electronic equipment and storage medium
US20150379112A1 (en) Creating an on-line job function ontology
CN113704058B (en) Service model monitoring method and device and electronic equipment
CN110826306B (en) Data acquisition method and device, computer readable storage medium and electronic equipment
CN112241417B (en) Page data verification method and device, medium and electronic equipment
CN107832408B (en) Power grid defect recommendation method based on data labels and entropy weight method
CN115860877A (en) Product marketing method, device, equipment and medium
CN114185641B (en) Virtual machine cold migration method and device, electronic equipment and storage medium
CN115757075A (en) Task abnormity detection method and device, computer equipment and storage medium
CN115525192A (en) User-oriented quotation charging method and device, computer equipment and storage medium
CN115035044A (en) Be applied to intelligent AI platform of industry quality inspection
CN115169578A (en) AI model production method and system based on meta-space data markers
CN212112557U (en) Manufacturing management integrated information system
US11580876B2 (en) Methods and systems for automatic creation of in-application software guides based on machine learning and user tagging
US20140114730A1 (en) System and method for capability development in an organization
US20240104004A1 (en) Intelligent accessibility testing
CN112968941B (en) Data acquisition and man-machine collaborative annotation method based on edge calculation
EP4089592A1 (en) Method for determining annotation capability information, related apparatus and computer program product
CN113408633B (en) Method, apparatus, device and storage medium for outputting information
US20140170618A1 (en) System and Method for Facilitating Career Growth in an Organization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination