CN114912582A - Model production method, model production device, electronic device, and storage medium - Google Patents
- Publication number
- CN114912582A (application CN202210532860.4A)
- Authority
- CN
- China
- Prior art keywords
- model
- data set
- data
- trained
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
Abstract
The present disclosure provides a model production method and apparatus, an electronic device, and a storage medium. The method relates to the technical field of computer applications, and in particular to artificial intelligence fields such as model production, model management, and model testing. The implementation scheme is as follows: receiving a model production request; determining, based on the model production request, a data set to be trained and first resource configuration information for the data set to be trained; and training a preset model with the data set to be trained, based on the first resource indicated by the first resource configuration information, to obtain a target model. This technical scheme lowers the threshold for using a model production line and improves model generation efficiency.
Description
Technical Field
The present disclosure relates to the field of computer application technologies, in particular to artificial intelligence fields such as model production, model management, and model testing, and more particularly to a model production method and apparatus, an electronic device, and a storage medium.
Background
With the development of artificial intelligence, AI technology has been applied across industries. Machine learning is the core of artificial intelligence: it is the fundamental way for computers to acquire intelligence, and it is applied in every field of AI. In practical terms, machine learning trains a model on data and then uses the model to make predictions. A model production line codifies the steps of model training and model deployment so that new models can be trained and deployed online. A traditional model production line, however, requires users to have substantial background knowledge and to write code; for users without programming or parameter-tuning skills, the barrier to entry is high and model generation is inefficient.
Disclosure of Invention
The disclosure provides a model production method, a model production device, an electronic device and a storage medium.
According to a first aspect of the present disclosure, there is provided a model production method comprising:
receiving a model production request;
determining, based on the model production request, a data set to be trained and first resource configuration information for the data set to be trained;
and training a preset model with the data set to be trained, based on the first resource indicated by the first resource configuration information, to obtain a target model.
According to a second aspect of the present disclosure, there is provided a model production apparatus comprising:
the first receiving module is used for receiving a model production request;
the determining module is used for determining, based on the model production request, a data set to be trained and first resource configuration information for the data set to be trained;
and the generating module is used for training a preset model with the data set to be trained, based on the first resource indicated by the first resource configuration information, to obtain a target model.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method provided by the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method provided by the first aspect described above.
According to the technical scheme, the use threshold of the model production line can be reduced, and the model generation efficiency is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow diagram of a method of model production according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow diagram of data cleansing according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a target model-based detection process according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a model production architecture according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the component structure of a model production apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a component architecture of a model production system according to an embodiment of the present disclosure;
FIG. 7 is a schematic view of a scenario of model generation according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing a model production method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terms "first," "second," "third," etc. in the description, claims, and figures of the present disclosure are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion: a method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such method, system, article, or apparatus.
The embodiment of the present disclosure provides a model production method that may be applied to an electronic device used with a model production line. Specifically, the electronic device may be a component of the model production line, or may be independent of it while remaining connectable to it. The electronic device includes, but is not limited to, fixed devices and/or mobile devices. Fixed devices include, but are not limited to, servers, which may be cloud servers or general servers. Mobile devices include, but are not limited to, terminals such as mobile phones or tablet computers. As shown in fig. 1, the model production method includes:
s101: receiving a model production request;
s102: determining, based on the model production request, a data set to be trained and first resource configuration information for the data set to be trained;
s103: training a preset model with the data set to be trained, based on the first resource indicated by the first resource configuration information, to obtain a target model.
In embodiments of the present disclosure, the model production request may be a request entered by a user through a user interface. In practice, the electronic device presents a plurality of data sets to the user through the user interface, so that the user can designate one of them as the data set to be trained. The model production request may further include first resource configuration information indicating the resources required for model training. The first resource configuration information includes configuration information for at least one of the following resources: a Central Processing Unit (CPU), memory, and a Graphics Processing Unit (GPU). It should be appreciated that, in some embodiments, when the model production request does not carry first resource configuration information, the electronic device automatically determines the first resource configuration information for the request.
In the embodiment of the present disclosure, the preset model is the initial model used for training to obtain the target model. For example, the preset model may be a Region-based Convolutional Neural Network (R-CNN) model. As another example, it may be a Fully Convolutional Network (FCN) model. As another example, it may be a model based on the YOLOv3 (You Only Look Once, version 3) algorithm.
In the disclosed embodiment, the target model is the model produced in response to the model production request. For example, the target model may be an object detection model, a text matching model, or an image classification model. These are merely examples and are not an exhaustive list of possible target model types.
In the embodiment of the present disclosure, after the target model is obtained, the target model is stored, so that the target model is subsequently used by the user.
According to the technical scheme of the embodiment of the disclosure, a data set to be trained and first resource configuration information are determined based on a received model production request, and a preset model is trained with the data set to be trained, based on the first resource indicated by the first resource configuration information, to obtain a target model. The user therefore need not write code or tune model parameters: the user only inputs a model production request, and the system completes model generation automatically, which lowers the threshold for using the model production line and improves model generation efficiency.
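The flow S101–S103 can be sketched as follows. The disclosure describes the flow only at the level of FIG. 1, so all class, field, and function names here are illustrative assumptions, and training is reduced to a placeholder:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResourceConfig:
    """Illustrative first-resource configuration: CPU cores, memory (GB), GPUs."""
    cpus: int = 4
    memory_gb: int = 16
    gpus: int = 1

@dataclass
class ModelProductionRequest:
    dataset_name: str
    resource_config: Optional[ResourceConfig] = None  # may be omitted by the user

def produce_model(request: ModelProductionRequest, datasets: dict) -> dict:
    # S102: determine the data set to be trained and the first resource configuration
    dataset = datasets[request.dataset_name]
    config = request.resource_config or ResourceConfig()  # auto-determined if absent
    # S103: train a preset model on the first resource (training itself is elided)
    return {"trained_on": request.dataset_name, "samples": len(dataset), "gpus": config.gpus}

# S101: receive a model production request naming one of the presented data sets
model = produce_model(ModelProductionRequest("defects"), {"defects": [1, 2, 3]})
```

Note that when `resource_config` is absent the sketch falls back to a default, mirroring the embodiment in which the electronic device determines the first resource configuration automatically.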
In some embodiments, determining a set of data to be trained based on the model production request comprises: receiving a first data set uploaded through a user terminal; based on the model production request, a data set to be trained is determined from the first data set.
Here, the number of data items in the data set to be trained is less than or equal to the number in the first data set. The embodiment of the present disclosure does not limit the number of first data sets. When a plurality of first data sets are received from the user terminal, they are preprocessed to obtain one or more data sets that can serve as data sets to be trained for that user terminal.
Therefore, determining the data set to be trained from the first data set uploaded by the user terminal, based on the model production request, ties the produced target model to the first data set and improves how well the target model matches the user's data.
In some embodiments, determining a data set to be trained from the first data set comprises: preprocessing the first data set, and determining a data set to be trained from the preprocessed first data set; or, in the case that the first data set is preprocessed, determining a data set to be trained from the first data set.
In some embodiments, in response to detecting that a first data set uploaded by the user terminal needs preprocessing, all or part of the data in the preprocessed first data set forms the data set to be trained. In some embodiments, in response to detecting that the first data set does not need preprocessing, all or part of the data in the first data set forms the data set to be trained.
In some embodiments, in response to detecting that a plurality of first data sets uploaded by the user terminal need preprocessing, the data set specified in the model production request is selected from the preprocessed first data sets, and all or part of its data forms the data set to be trained. In some embodiments, in response to detecting that the plurality of first data sets need no preprocessing, the specified data set is selected from them directly, and all or part of its data forms the data set to be trained.
Therefore, the diversity of the data set to be trained can be improved, and the diversity of the target model is promoted.
In some embodiments, preprocessing the first data set includes performing a cleansing process on data in the first data set.
Here, the cleaning process includes, but is not limited to, a variety of basic cleaning operations on an image data set, such as automatic deblurring, near-duplicate removal, rotation, and mirroring. For example, an intelligent data service platform (EasyData) may be used to clean the data in the first data set. The disclosed embodiments do not limit the technique used for the cleaning process.
In some embodiments, preprocessing the first data set includes annotating data in the first data set.
Here, the annotation process includes, but is not limited to, manual annotation and automatic annotation. After a first data set imported by the user through the user interface is received, if annotation information entered by the user through the user interface is detected, that annotation information is stored; this supports manual annotation of the data in the first data set. If no annotation information is detected within a preset time, the data in the first data set is annotated intelligently and the result is stored. Intelligent annotation can use either active learning or a designated model; it can automatically screen and annotate hard cases, and the annotation is finalized after manual confirmation that it meets the standard.
Therefore, preprocessing the received data set improves the accuracy of the finally stored data set and provides accurate data for fast subsequent model training, improving both the efficiency and the accuracy of model generation.
In some embodiments, before training the preset model with the data set to be trained, the method may further include: removing wrongly labeled data and unlabeled data from the data set to be trained.
In order to ensure the accuracy of the data in the data set to be trained, the interference data in the data set to be trained needs to be removed. Fig. 2 shows a schematic flow chart of data cleansing, and as shown in fig. 2, the flow chart comprises:
s201: receiving a cleaning starting instruction;
s202: detecting image information in a data set to be trained, and then executing S203;
s203: determining whether an image path exists; if so, executing S204; if not, executing S206;
s204: determining whether the annotation information is correct; if so, executing S205; if not, executing S207;
s205: determining whether the number of annotation entries is greater than 0; if so, executing S209; if not, executing S208;
here, the annotation information equal to 0 means no annotation information.
S206: deleting the image information;
s207: deleting the labeling information;
s208: deleting the image;
s209: and reserving the image corresponding to the image information and the labeling information thereof.
Therefore, performing this interference-removal processing before training provides an accurate data basis for generating the subsequent target model and improves the accuracy of the generated target model.
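The cleaning flow of FIG. 2 (S201–S209) reduces to a small decision function. The function and parameter names below are illustrative, not taken from the disclosure:

```python
def clean_record(image_path_exists: bool, annotation_correct: bool, annotation_count: int) -> str:
    """Decide the cleaning action for one image record, following FIG. 2."""
    if not image_path_exists:
        return "delete image information"   # S206: no image path
    if not annotation_correct:
        return "delete annotation"          # S207: wrong annotation
    if annotation_count == 0:               # zero annotations means unlabeled
        return "delete image"               # S208
    return "keep image and annotation"      # S209

assert clean_record(False, True, 3) == "delete image information"
assert clean_record(True, False, 3) == "delete annotation"
assert clean_record(True, True, 0) == "delete image"
assert clean_record(True, True, 2) == "keep image and annotation"
```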
In some embodiments, training a preset model with a data set to be trained based on a first resource indicated by the first resource configuration information to obtain a target model, includes: determining currently available resources; determining preset parameters for a preset model based on currently available resources; and training a preset model with preset parameters by adopting a data set to be trained based on the first resource indicated by the first resource configuration information to obtain a target model.
Wherein the currently available resources include at least one of: the number of currently available GPU resources; and the amount of currently available video memory.
Here, different preset models correspond to different preset parameters.
Here, the preset parameters include, but are not limited to: the number of training iterations (epochs), the number of training samples in each batch (batch_size), the number of batches needed to complete one epoch (iterations), and the learning rate (learning_rate).
Wherein an epoch is one pass in which all the data is fed through the network for one forward computation and backward propagation. Because the data volume is large, all the data cannot be fed to the preset model at once, so it is fed in batches; moreover, a single pass over the data is not enough for training, and the process must be repeated many times before the model fits and converges.
Where one portion of the data is trained in the network at a time, batch_size is the number of training samples in each batch.
Where iterations is the number of batches needed to complete one epoch.
Assuming there are 2000 samples divided into 4 batches, batch_size is 500. Running all the data through once completes 1 epoch and requires 4 iterations.
Better preset parameters are provided for each preset model. For example, when the preset model is R-CNN, the preset parameters used include: epoch = 15, batch_size = 2, lr = 0.00025, and so on.
In some embodiments, the flow of automatically adjusting the batch _ size is as follows:
traversing all GPUs, finding the GPU with the least free video memory, recorded as MIN_GPU_MEM;
determining the memory consumed per sample by the preset model in use, recorded as MEM_PER_BATCH; for example, Faster R-CNN may take 2800 and YOLOv3 may take 1400;
finally, BATCH_SIZE = MAX((MIN_GPU_MEM / MEM_PER_BATCH), 1).
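The batch-size rule can be sketched as follows, assuming free memory is given in MB per GPU and the per-sample cost per model in MB; integer (floor) division is an assumption, since the disclosure does not specify rounding:

```python
def auto_batch_size(gpu_free_mem_mb: list, mem_per_batch_mb: int) -> int:
    """BATCH_SIZE = MAX(MIN_GPU_MEM / MEM_PER_BATCH, 1)."""
    min_gpu_mem = min(gpu_free_mem_mb)              # GPU with the least free video memory
    return max(min_gpu_mem // mem_per_batch_mb, 1)  # floor division assumed

# Three GPUs; the tightest has 8400 MB free; Faster R-CNN at 2800 MB per sample
assert auto_batch_size([11000, 8400, 16000], 2800) == 3
# A very tight GPU still yields a batch size of at least 1
assert auto_batch_size([1000], 2800) == 1
```

Sizing to the tightest GPU guarantees that the chosen batch fits on every device, which is what prevents video-memory overflow during multi-GPU training.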
In some embodiments, the process of automatically adjusting the learning _ rate is as follows:
recording the number of GPUs as GPU_NUM, where BATCH_SIZE uses the value computed above;
determining BASE_LR according to the preset model in use; for example, both Faster R-CNN and YOLOv3 may take 0.000125;
finally, learning_rate = BASE_LR × BATCH_SIZE × GPU_NUM.
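The linear scaling rule above, sketched with the Faster R-CNN base rate given as an example (the batch size and GPU count are arbitrary illustrative values):

```python
def auto_learning_rate(base_lr: float, batch_size: int, gpu_num: int) -> float:
    """learning_rate = BASE_LR * BATCH_SIZE * GPU_NUM (linear scaling)."""
    return base_lr * batch_size * gpu_num

# Faster R-CNN base rate from the example above, batch_size 2, 4 GPUs
lr = auto_learning_rate(0.000125, 2, 4)
assert abs(lr - 0.001) < 1e-12
```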
Therefore, a user does not need to manually adjust parameters of the model, only needs to input the first data set and the model production request, and the system can automatically complete model generation, so that the use threshold of model production is reduced, and the model generation efficiency is improved.
In some embodiments, determining the preset parameters for the preset model includes: determining preset parameters according to the number of currently available GPU resources and the amount of available video memory.
For example, batch_size and learning_rate are automatically adjusted according to the number of GPUs and the amount of video memory, ensuring the training effect while avoiding video-memory overflow.
Therefore, the preset parameters of the preset model can accord with the currently available resources, and the problem of training interruption or error caused by the fact that the preset parameter setting does not accord with the resource configuration is avoided.
In some embodiments, training a preset model with preset parameters by using a data set to be trained to obtain a target model, includes: acquiring a training subset and a verification subset from a data set to be trained; training a preset model with preset parameters by adopting a training subset to obtain a plurality of models in the training process; verifying the plurality of models respectively by adopting the verification subsets to obtain first accuracy rates respectively corresponding to the plurality of models; and determining the target model according to the model with the highest first accuracy in the plurality of models.
Here, obtaining the training subset and the verification subset from the data set to be trained includes: and acquiring a training subset and a verification subset from the data set to be trained according to a preset proportion.
Here, the preset ratio may be 8:1:1, where the training subset accounts for 80% of the data in the data set to be trained, the verification subset accounts for 10%, and the remaining 10% can serve as the test subset. It is understood that the preset ratio can be set or adjusted according to design requirements, such as model generation speed or model generation accuracy.
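A minimal sketch of the 80/10/10 split described above. Random shuffling with a fixed seed is an assumed detail; the disclosure does not specify how the subsets are drawn:

```python
import random

def split_dataset(data, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle and split into training, verification, and test subsets (8:1:1)."""
    items = list(data)
    random.Random(seed).shuffle(items)  # shuffling is an assumed detail
    n = len(items)
    n_train, n_val = int(n * ratios[0]), int(n * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = split_dataset(range(2000))
# 1600 training, 200 verification, 200 test samples
```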
Here, "a plurality of models" is a general term; the specific number may vary with the number of training iterations. For a given model production line, the number of training iterations is preset by the system, and the user need not set or adjust it.
Here, the first accuracy is an accuracy obtained by verifying the target model using the verification subset.
For example, a preset model is trained with the training subset, and N models are obtained during iterative training, denoted N1, N2, …, Nn. The verification subset is used to verify the N models, yielding a first accuracy r1 for model N1, r2 for model N2, …, and rn for model Nn. If r1 > r2 > … > rn, the model N1 with the highest first accuracy can be tested with the test subset to obtain a second accuracy r1′ for model N1; finally, model N1 is determined to be the target model.
Here, taking model N1 as an example: the first accuracy r1 = (number of verification-subset samples correctly identified by model N1) / (total number of verification-subset samples), and the second accuracy r1′ = (number of test-subset samples correctly identified by model N1) / (total number of test-subset samples). Both the first and the second accuracy values reflect the performance of the target model.
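The first-accuracy computation and checkpoint selection can be sketched as follows; toy callables stand in for trained checkpoints, and the labeled-pair data format is an assumption:

```python
def first_accuracy(model, verification_subset):
    """Correctly identified verification samples divided by the subset size."""
    correct = sum(1 for x, label in verification_subset if model(x) == label)
    return correct / len(verification_subset)

def select_target_model(checkpoints, verification_subset):
    """Return the checkpoint N1..Nn with the highest first accuracy."""
    return max(checkpoints, key=lambda m: first_accuracy(m, verification_subset))

# Toy checkpoints: classify the sign of a number
val = [(-2, "neg"), (-1, "neg"), (3, "pos"), (5, "pos")]
always_pos = lambda x: "pos"                       # 2 of 4 correct
sign_model = lambda x: "pos" if x > 0 else "neg"   # 4 of 4 correct
```

The second accuracy is the same computation applied to the test subset, with the selected checkpoint as the only candidate.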
Therefore, the optimal target model can be screened out for the data set to be trained, and the performance of the finally generated target model is improved.
In some embodiments, determining the target model from a first highest accuracy model of the plurality of models comprises: acquiring a test subset from a data set to be trained; testing the model with the highest first accuracy in the plurality of models by adopting the test subset to obtain a second accuracy; and determining the target model according to the second accuracy.
Here, the second accuracy is an accuracy obtained by verifying the target model using the test subset.
In some embodiments, if the second accuracy reaches a preset threshold, the model with the highest first accuracy is taken as the target model; if it does not, the training, verification, and test subsets are re-drawn from the data set to be trained, and the process repeats until the second accuracy reaches the threshold, at which point the model with the highest first accuracy is taken as the target model.
In some embodiments, determining the target model from the model with the highest first accuracy among the plurality of models comprises: acquiring a test subset from the data set to be trained; testing the model with the highest first accuracy using the test subset to obtain a second accuracy; outputting the second accuracy to the user terminal; and determining the target model based on instruction information fed back by the user terminal for the second accuracy.
In some embodiments, upon receiving a display operation on a model list sent by a user terminal, an electronic device outputs the performance indexes of each target model in the model list, including the first accuracy and the second accuracy, to indicate how strong the performance of each target model is, which facilitates the user terminal in selecting a target model that meets its requirements from the model list.
Therefore, the generated target models can be conveniently searched subsequently, and the performance of each target model can be conveniently output and displayed subsequently.
In some embodiments, after the target model is obtained, the target model may be applied for detection.
As shown in fig. 3, the detection process includes:
S301: receiving a detection request, wherein the detection request comprises data to be detected and second resource configuration information;
S302: detecting the data to be detected based on the second resource indicated by the second resource configuration information and the target model, to obtain a detection result of the data to be detected.
Here, the data to be detected includes, but is not limited to, an image to be detected, a text to be detected, and the like.
In some embodiments, S301 and S302 described above may be performed after S103.
In the disclosed embodiment, the detection request may be input by a user through a user interface.
In the embodiment of the present disclosure, the second resource configuration information indicates the resources used when the target model is run. The second resource configuration information includes configuration information of at least one of the following resources: CPU, memory, GPU.
The second resource configuration information may be the same as or different from the first resource configuration information.
Therefore, the generated target model can be tested on line, and the efficiency of model testing is improved.
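A minimal sketch of the S301/S302 flow, assuming a hypothetical `DetectionRequest` structure and treating the models and the resource allocation step as plain callables:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class DetectionRequest:
    data: Any                        # data to be detected (image, text, ...)
    resource_config: Dict[str, Any]  # second resource configuration (CPU/memory/GPU)
    model_name: str = "N1"           # target model designated by the user


def handle_detection(request: DetectionRequest,
                     models: Dict[str, Callable],
                     allocate: Callable):
    """S301/S302: allocate the second resource, then run the designated model."""
    allocate(request.resource_config)  # e.g. reserve CPU/memory/GPU for inference
    model = models[request.model_name]
    return model(request.data)         # detection result


# Hypothetical model registry and a logging allocator.
models = {"N1": lambda text: text.upper()}
log = []
result = handle_detection(DetectionRequest("defect?", {"cpu": 2, "gpu": 1}),
                          models, allocate=log.append)
```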
In some embodiments, in the case that a plurality of target models are obtained in S103, the detection request may further include a target specified model designated from the plurality of target models. For example, N target models, denoted N1, N2, …, Nn, are obtained in S103; if the detection request designates N1 as the model for detecting the data, model N1 is the target specified model. Further, detecting the data to be detected based on the second resource indicated by the second resource configuration information and the target model to obtain a detection result of the data to be detected includes: detecting the data to be detected based on the second resource indicated by the second resource configuration information and the target specified model, to obtain the detection result of the data to be detected. For example, if model N1 is the target specified model, a second resource is allocated to model N1, so that model N1 runs on the second resource and detects the data to be detected.
Therefore, the generated target model can be tested on line, and the efficiency of model testing is improved.
FIG. 4 shows a schematic diagram of a model production architecture, which, as shown in FIG. 4, includes three major parts, dataset preparation, model training, and online testing.
Wherein the data set preparation comprises:
Step 1: data import. The user imports the image data set into the system. The system provides a data-set management function that can manage multiple groups of data sets of various kinds, and, by means of the Baidu EasyData technology, can perform various basic cleaning operations on the image data set, such as automatic deblurring, near-duplicate removal, rotation, and mirroring. If the data set imported by the user already carries label information, step 2 can be omitted.
Step 2: data annotation. The user can annotate the imported data; both manual annotation and automatic annotation are supported by means of the Baidu EasyData technology. Automatic annotation can work in two modes, active learning and a designated model, and can automatically screen out and label hard examples; annotation is finished after manual confirmation that it meets the standard.
For model training, the user only needs to specify a complete data set with label information and configure the resources to be used (such as CPU, memory, and GPU); the target model is output after the whole training cycle.
Here, the model training includes:
Step 1: data loading. The system displays all currently available data sets and basic information such as the specific images and labels in each data set; the user can select a group of data sets for the system to load.
Step 2: data cleaning. The input image data may contain some erroneous data or data without label information, which interferes with normal training, so data cleaning is required before training.
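A minimal sketch of such cleaning, assuming samples are dictionaries with hypothetical `image` and `label` fields:

```python
def clean_dataset(samples):
    """Drop samples with missing labels or obviously erroneous data before
    training (a minimal sketch; real checks would be richer)."""
    cleaned = []
    for sample in samples:
        if sample.get("label") is None:  # no annotation information
            continue
        if not sample.get("image"):      # empty/unreadable image counts as error data
            continue
        cleaned.append(sample)
    return cleaned


raw = [{"image": "a.jpg", "label": "cat"},
       {"image": "b.jpg"},               # unlabeled -> removed
       {"image": "", "label": "dog"}]    # error data -> removed
clean = clean_dataset(raw)
```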
Step 3: data segmentation. The data set to be trained is segmented into a training subset, a verification subset, and a test subset, where the training subset accounts for 80% and the verification subset and the test subset each account for 10%. The training subset is used for model training, the verification subset for iterative verification, and the test subset for testing the effect of the model.
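The 80/10/10 segmentation can be sketched as follows; the seeded random shuffle is an illustrative choice, not mandated by the text:

```python
import random


def split_dataset(dataset, seed=0):
    """80% training, 10% verification, 10% test, as in step 3."""
    rng = random.Random(seed)
    data = list(dataset)
    rng.shuffle(data)  # shuffle so each subset is a random sample
    n = len(data)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])


train_sub, val_sub, test_sub = split_dataset(range(100))
```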
Step 4: model training. Training is performed with a preset model and preset parameters. Suitable preset parameters are provided for each preset model; for example, for Faster R-CNN the preset parameters epoch = 15, batch_size = 2, lr = 0.00025, and so on are used. Meanwhile, the batch_size and the learning rate are automatically adjusted according to the number of GPUs and the amount of video memory, so as to ensure the training effect while avoiding video-memory overflow.
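A sketch of such automatic adjustment, starting from the Faster R-CNN defaults mentioned above; the per-image video-memory estimate and the linear learning-rate scaling rule are illustrative assumptions, not the scheme actually used by the system:

```python
def preset_parameters(num_gpus, vram_gb_per_gpu):
    """Scale batch_size and learning rate to the available GPUs and video
    memory, starting from the Faster R-CNN defaults in the text."""
    params = {"epoch": 15, "batch_size": 2, "lr": 0.00025}
    # Assumed heuristic: ~5 GB of video memory per image; never exceed what fits.
    per_gpu_batch = max(1, min(params["batch_size"], int(vram_gb_per_gpu // 5)))
    total_batch = per_gpu_batch * max(1, num_gpus)
    # Linear scaling rule: learning rate grows with the effective batch size.
    params["lr"] = params["lr"] * total_batch / params["batch_size"]
    params["batch_size"] = total_batch
    return params


p = preset_parameters(num_gpus=2, vram_gb_per_gpu=11)  # e.g. two 11 GB GPUs
q = preset_parameters(num_gpus=1, vram_gb_per_gpu=4)   # one small-memory GPU
```

Capping the per-GPU batch by available memory is what prevents the video-memory overflow the text mentions; scaling the learning rate keeps the training effect comparable across configurations.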
Step 5: model evaluation. The multiple models generated in the iterative process are predicted on and evaluated with the test subset, and the accuracy on the test subset and the accuracy on the verification subset are recorded.
Step 6: model screening. The optimal model is screened out according to the verification-subset metric, input into the model management system, and its metrics are recorded.
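Step 6 amounts to an argmax over the verification-subset metric; a sketch, with the record format being a hypothetical choice:

```python
def screen_models(models, first_accuracies):
    """Pick the model with the highest verification (first) accuracy and
    build a record for the model management system."""
    best_index = max(range(len(models)), key=lambda i: first_accuracies[i])
    record = {"model_index": best_index,
              "first_accuracy": first_accuracies[best_index]}
    return models[best_index], record


best, record = screen_models(["N1", "N2", "N3"], [0.71, 0.93, 0.88])
```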
After the training is finished, the user can start the prediction service by using the trained model and perform service test.
Here, the online test includes:
step 1: and selecting a model, and selecting the model to be tested in the model list by the user.
Step 2: and starting the prediction service, configuring information such as a CPU (central processing unit), a memory, a GPU (graphic processing unit) and the like required by the prediction service by a user, and setting automatic service release time to save resources.
Step 3: online test. The interface required for calling the prediction service is packaged in advance by the system; the user can test by uploading data to be detected through the interface, and the system displays the prediction result, including information such as the labeled position and the confidence.
In addition, the user may initiate a prediction service on multiple models to compare the training effect between the models.
In addition, the model parameters can be exposed instead of being adjusted internally or automatically, so that users can adjust the parameters themselves to optimize the model effect.
The model production architecture provided by the embodiment of the disclosure abstracts and encapsulates the full life cycle of model production and provides a complete, out-of-the-box target model production line. The user only needs to input a data set, without being concerned with code writing or model parameter tuning; the system automatically completes the full flow of data cleaning, data segmentation, model training, model evaluation, and model screening, and the generated models can be managed and tested online. In the form of a production line, the training of a target model can be completed in a very short time, and the data sets and the produced models can be managed uniformly. In addition, the user does not need professional expertise, which greatly reduces the cost of use.
It should be understood that the architecture diagram shown in fig. 4 is merely illustrative and not restrictive, and that various obvious changes and/or substitutions may be made by those skilled in the art based on the example of fig. 4, and still fall within the scope of the disclosure of the embodiments of the disclosure.
The embodiment of the disclosure provides a model production apparatus, which is applied to a model production line. As shown in fig. 5, the model production apparatus may include: a first receiving module 501, configured to receive a model production request; a determining module 502, configured to determine, based on the model production request, a data set to be trained and first resource configuration information for the data set to be trained; and a generating module 503, configured to train a preset model with the data set to be trained based on the first resource indicated by the first resource configuration information, to obtain a target model.
In some embodiments, the first receiving module 501 is further configured to receive a first data set uploaded by a user terminal; the determining module 502 is further configured to determine a data set to be trained from the first data set based on the model production request.
In some embodiments, the determining module 502 is further configured to: preprocessing the first data set, and determining a data set to be trained from the preprocessed first data set; or, in the case that the first data set is preprocessed, determining a data set to be trained from the first data set.
Wherein the preprocessing comprises at least one of:
cleaning the data in the first data set;
and performing labeling processing on the data in the first data set.
In some embodiments, the generating module 503 includes: a first determining submodule for determining currently available resources; the second determining submodule is used for determining preset parameters for the preset model based on the currently available resources; and the generation submodule is used for training the preset model with the parameters being preset parameters by adopting a data set to be trained on the basis of the first resource indicated by the first resource configuration information to obtain the target model.
Wherein the currently available resources include: the number of currently available GPU resources and/or the amount of currently available video memory.
In some embodiments, the generation submodule is configured to obtain a training subset and a verification subset from a data set to be trained; training a preset model with preset parameters by adopting a training subset to obtain a plurality of models in the training process; verifying the plurality of models respectively by adopting the verification subsets to obtain first accuracy rates respectively corresponding to the plurality of models; and determining the target model according to the model with the highest first accuracy in the plurality of models.
In some embodiments, the generating sub-module is further configured to obtain a test subset from the data set to be trained; testing the model with the highest first accuracy in the plurality of models by adopting the test subset to obtain a second accuracy; and determining the target model according to the second accuracy, or outputting the second accuracy to the user terminal, and determining the target model based on instruction information fed back by the user terminal aiming at the second accuracy.
In some embodiments, the model production apparatus may further include: a second receiving module 504 (not shown in the figure), configured to receive a detection request, where the detection request includes data to be detected and second resource configuration information; and a test module 505 (not shown in the figure), configured to detect the data to be detected based on the second resource indicated by the second resource configuration information and the target model, to obtain a detection result of the data to be detected.
In some embodiments, in a case that there are multiple target models, the detection request further includes a target specification model specified from the multiple target models, and the test module 505 (not shown in the figure) is further configured to detect the data to be detected based on the second resource indicated by the second resource configuration information and the target specification model, so as to obtain a detection result of the data to be detected.
It should be understood by those skilled in the art that the functions of the processing modules in the model production apparatus of the embodiments of the present disclosure may be understood with reference to the foregoing description of the model production method, and may be realized by analog circuits that implement the functions described in the embodiments of the present disclosure, or by running, on an electronic device, software that performs the functions described in the embodiments of the present disclosure.
The model production device of the embodiment of the disclosure can automatically complete model generation, thereby not only reducing the use threshold of a model production line, but also improving the efficiency of model generation.
The embodiment of the disclosure provides a model production system, which is applied to a model production line. As shown in fig. 6, the model production system may include: a user interface 601 for receiving a plurality of first data sets and for receiving model production requests; a data management module 602, configured to pre-process a plurality of first data sets to obtain a plurality of second data sets; a model training module 603, configured to determine, based on the model production request, a data set to be trained and first resource configuration information, where the data set to be trained is a data set in the second data set; training a preset model by adopting the data set to be trained based on the first resource indicated by the first resource configuration information to obtain a target model; the online test module 604 is configured to receive a detection request through the user interface, where the detection request includes data to be detected, an assigned target model, and second resource configuration information; and inputting the data to be detected into the specified target model based on the second resource indicated by the second resource configuration information to obtain the detection result of the data to be detected.
In some embodiments, the data management module 602 is configured to pre-process the plurality of first data sets in at least one of: cleaning the data in the plurality of first data sets; and performing labeling processing on the data in the plurality of first data sets.
In some embodiments, the data management module 602 is further configured to remove erroneously labeled data and unlabeled data from the data set to be trained before the model training module 603 trains the preset model with the data set to be trained.
In some embodiments, the model training module 603 is further configured to determine currently available resources; determining preset parameters for a preset model based on currently available resources; and training a preset model with preset parameters by adopting a data set to be trained based on the first resource indicated by the first resource configuration information to obtain a target model.
In some embodiments, the model training module 603 is specifically configured to determine the preset parameters for the preset model according to the number of currently available GPU resources and the amount of currently available video memory.
In some embodiments, the model training module 603 is further configured to obtain a training subset and a verification subset from the data set to be trained; training a preset model with preset parameters by adopting a training subset to obtain a plurality of models in the training process; verifying the plurality of models respectively by adopting the verification subsets to obtain first accuracy rates respectively corresponding to the plurality of models; and determining the target model according to the model with the highest first accuracy in the plurality of models.
In some embodiments, the model training module 603 is further configured to obtain a test subset from the data set to be trained; testing the model with the highest first accuracy in the plurality of models by adopting the test subset to obtain a second accuracy; and determining the target model according to the second accuracy, or outputting the second accuracy to the user terminal, and determining the target model based on instruction information fed back by the user terminal aiming at the second accuracy.
In some embodiments, the data management module 602 is further configured to store each target model and record a performance indicator of each target model, where the performance indicator includes a first accuracy and a second accuracy.
It should be understood by those skilled in the art that the functions of each processing module in the model production system according to the embodiments of the present disclosure may be understood by referring to the foregoing description of the model production method, and each processing module in the model production apparatus according to the embodiments of the present disclosure may be implemented by an analog circuit that implements the functions described in the embodiments of the present disclosure, or may be implemented by running software that implements the functions described in the embodiments of the present disclosure on an electronic device.
The model production system of the embodiment of the disclosure can automatically complete model generation, thereby not only reducing the use threshold of a model production line, but also improving the efficiency of model generation.
Fig. 7 shows a schematic view of a scenario for model production. As can be seen from fig. 7, an electronic device such as a cloud server receives a plurality of first data sets imported from terminals; preprocesses the received first data sets to obtain a plurality of second data sets; determines, based on the model production requests received from the terminals, a data set to be trained and first resource configuration information; and trains a preset model with the data set to be trained based on the first resource indicated by the first resource configuration information, to obtain a target model. The electronic device also receives a detection request sent by a terminal, where the detection request includes data to be detected, a designated target model, and second resource configuration information, and inputs the data to be detected into the designated target model based on the second resource indicated by the second resource configuration information, to obtain a detection result of the data to be detected. Therefore, a user does not need to be concerned with code writing or model parameter tuning: the user only needs to input the first data set, and the system automatically completes the full flow of data cleaning, data segmentation, model training, model evaluation, and model screening, and can manage and test the generated models online. For example, a user who wants to detect target object 1 in a batch of images may first import some data sets with label information into the model production system; the model production system automatically generates target models, and target object 1 in the batch of images is then detected through a target model.
For another example, a user who wants to detect a target text in a batch of texts first imports some data sets with label information into the model production system; the model production system automatically generates a target model, and the target text in the batch of texts is then detected through the target model.
The number of the terminals and the electronic devices is not limited in the disclosure, and the practical application may include a plurality of terminals and a plurality of electronic devices.
It should be understood that the scene diagram shown in fig. 7 is only illustrative and not restrictive, and those skilled in the art may make various obvious changes and/or substitutions based on the example of fig. 7, and the obtained technical solution still belongs to the disclosure scope of the embodiments of the present disclosure.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations, and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes in accordance with a computer program stored in a Read-Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An Input/Output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable model production apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory, a read-only memory, an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a Display device (e.g., a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above, reordering, adding or deleting steps, may be used. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (23)
1. A method of model production comprising:
receiving a model production request;
determining a data set to be trained and first resource configuration information aiming at the data set to be trained based on the model production request;
and training a preset model by adopting the data set to be trained based on the first resource indicated by the first resource configuration information to obtain a target model.
2. The method of claim 1, wherein determining a set of data to be trained based on the model production request comprises:
receiving a first data set uploaded through a user terminal;
determining the data set to be trained from the first data set based on the model production request.
3. The method of claim 2, wherein the determining the set of data to be trained from the first set of data comprises:
preprocessing the first data set, and determining the data set to be trained from the preprocessed first data set; or,
and determining the data set to be trained from the first data set under the condition that the first data set is preprocessed.
4. The method of claim 3, said pre-processing comprising at least one of:
cleaning the data in the first data set;
and performing labeling processing on the data in the first data set.
5. The method according to any one of claims 1 to 4, wherein the training a preset model with the data set to be trained based on the first resource indicated by the first resource configuration information to obtain a target model includes:
determining currently available resources;
determining preset parameters for the preset model based on currently available resources;
and training the preset model with the parameters of the preset parameters by adopting the data set to be trained based on the first resource indicated by the first resource configuration information to obtain the target model.
6. The method of claim 5, wherein the currently available resources comprise: the number of currently available graphics processor (GPU) resources and/or the amount of currently available video memory.
7. The method according to claim 5 or 6, wherein the training the preset model with the parameters of the preset parameters by using the data set to be trained to obtain the target model comprises:
acquiring a training subset and a verification subset from the data set to be trained;
training the preset model with the preset parameters by adopting the training subset to obtain a plurality of models in the training process;
verifying the plurality of models respectively by adopting the verification subsets to obtain first accuracy rates respectively corresponding to the plurality of models;
and determining the target model according to the model with the highest first accuracy among the plurality of models.
8. The method of claim 7, wherein said determining the target model from a first highest accuracy model of the plurality of models comprises:
acquiring a test subset from the data set to be trained;
testing the model with the highest first accuracy in the plurality of models by adopting the test subset to obtain a second accuracy;
and determining the target model according to the second accuracy, or outputting the second accuracy to a user terminal, and determining the target model based on instruction information fed back by the user terminal aiming at the second accuracy.
9. The method according to any one of claims 1-8, further comprising:
receiving a detection request, wherein the detection request comprises to-be-detected data and second resource configuration information;
and detecting the data to be detected based on the second resource indicated by the second resource configuration information and the target model to obtain a detection result of the data to be detected.
10. The method according to claim 9, wherein the target model is a plurality of target models, the detection request further comprises a designated target model selected from among the plurality of target models, and the detecting the data to be detected based on the second resource indicated by the second resource configuration information and the target model to obtain a detection result of the data to be detected comprises:
and detecting the data to be detected based on the second resource indicated by the second resource configuration information and the designated target model, to obtain the detection result of the data to be detected.
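Claims 9-10 describe the serving side: a detection request carries the data to be detected, second resource configuration information, and optionally a model designated from among several target models. A minimal sketch, in which the request layout and field names (`data`, `resource_config`, `target_model`) are illustrative assumptions:

```python
# Hypothetical sketch of claims 9-10. A callable stands in for a trained
# target model; the request dictionary keys are assumptions only.
def handle_detection_request(request: dict, target_models: dict) -> dict:
    name = request.get("target_model")  # optional designated target model
    model = target_models[name] if name else next(iter(target_models.values()))
    resources = request["resource_config"]  # e.g. {"gpus": 1}
    # run detection on the resources indicated by the second resource
    # configuration information and return the detection result
    return {"result": model(request["data"]), "resources": resources}
```

When no model is designated, this sketch falls back to the first registered target model; the patent leaves the fallback policy unspecified.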
11. A model production apparatus, comprising:
the first receiving module is used for receiving a model production request;
the determining module is used for determining a data set to be trained and first resource configuration information for the data set to be trained based on the model production request;
and the generating module is used for training a preset model by adopting the data set to be trained based on the first resource indicated by the first resource configuration information to obtain a target model.
12. The apparatus of claim 11, wherein,
the first receiving module is further configured to receive a first data set uploaded by a user terminal;
the determining module is further configured to determine the data set to be trained from the first data set based on the model production request.
13. The apparatus of claim 12, wherein the determining module is further configured to:
preprocess the first data set and determine the data set to be trained from the preprocessed first data set; or
determine the data set to be trained from the first data set in a case where the first data set has been preprocessed.
14. The apparatus of claim 13, wherein the preprocessing comprises at least one of:
cleaning the data in the first data set;
and performing labeling processing on the data in the first data set.
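The preprocessing in claims 13-14 (cleaning the data in the first data set and performing labeling processing on it) can be sketched as follows. The specific cleaning rules and the rule-based labeling are hypothetical placeholders, not the patent's implementation.

```python
# Illustrative sketch of claims 13-14. Cleaning here means dropping empty
# entries and exact duplicates; labeling pairs each sample with a label
# produced by a caller-supplied rule. Both choices are assumptions only.
def clean(first_data_set):
    """Drop empty entries and exact duplicates, preserving order."""
    seen, cleaned = set(), []
    for item in first_data_set:
        if item and item not in seen:
            seen.add(item)
            cleaned.append(item)
    return cleaned

def label(data, rule):
    """Pair each cleaned sample with a label produced by a labeling rule."""
    return [(item, rule(item)) for item in data]
```

In practice the labeling step is often manual or semi-automatic; a callable `rule` simply keeps the sketch self-contained.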
15. The apparatus of any of claims 11 to 14, wherein the generating module comprises:
a first determining submodule for determining currently available resources;
the second determining submodule is used for determining preset parameters for the preset model based on the currently available resources;
and the generation submodule is used for training, based on the first resource indicated by the first resource configuration information, the preset model configured with the preset parameters by using the data set to be trained, to obtain the target model.
16. The apparatus of claim 15, wherein the currently available resources comprise: a number of currently available graphics processor (GPU) resources and/or an amount of currently available video memory.
17. The apparatus of claim 15 or 16, wherein the generation submodule is configured to:
acquiring a training subset and a verification subset from the data set to be trained;
training the preset model configured with the preset parameters by using the training subset, to obtain a plurality of models in the training process;
verifying the plurality of models respectively by using the verification subset, to obtain first accuracies respectively corresponding to the plurality of models;
and determining the target model according to a model with the highest first accuracy among the plurality of models.
18. The apparatus of claim 17, wherein the generation submodule is further configured to:
acquiring a test subset from the data set to be trained;
testing the model with the highest first accuracy among the plurality of models by using the test subset, to obtain a second accuracy;
and determining the target model according to the second accuracy, or outputting the second accuracy to a user terminal and determining the target model based on instruction information fed back by the user terminal for the second accuracy.
19. The apparatus of any of claims 11-18, further comprising:
the second receiving module is used for receiving a detection request, wherein the detection request comprises data to be detected and second resource configuration information;
and the test module is used for detecting the data to be detected based on the second resource indicated by the second resource configuration information and the target model to obtain a detection result of the data to be detected.
20. The apparatus of claim 19, wherein the target model is a plurality of target models, the detection request further comprises a designated target model selected from among the plurality of target models, and the testing module is further configured to:
detect the data to be detected based on the second resource indicated by the second resource configuration information and the designated target model, to obtain a detection result of the data to be detected.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210532860.4A CN114912582A (en) | 2022-05-11 | 2022-05-11 | Model production method, model production device, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114912582A true CN114912582A (en) | 2022-08-16 |
Family
ID=82769192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210532860.4A Pending CN114912582A (en) | 2022-05-11 | 2022-05-11 | Model production method, model production device, electronic device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114912582A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024153013A1 (en) * | 2023-01-16 | 2024-07-25 | 维沃移动通信有限公司 | Information transmission method and apparatus and communication device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319938A (en) * | 2017-12-31 | 2018-07-24 | 奥瞳系统科技有限公司 | High quality training data preparation system for high-performance face identification system |
CN108875821A (en) * | 2018-06-08 | 2018-11-23 | Oppo广东移动通信有限公司 | Training method and device for classification model, mobile terminal, and computer-readable storage medium |
CN111160380A (en) * | 2018-11-07 | 2020-05-15 | 华为技术有限公司 | Method for generating video analysis model and video analysis system |
CN112799850A (en) * | 2021-02-26 | 2021-05-14 | 重庆度小满优扬科技有限公司 | Model training method, model prediction method, and model control system |
CN113706099A (en) * | 2021-08-23 | 2021-11-26 | 中国电子科技集团公司第二十八研究所 | Data labeling and deep learning model training and service publishing system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113220573B (en) | Test method and device for micro-service architecture and electronic equipment | |
CN110879776A (en) | Test case generation method and device | |
CN114564374A (en) | Operator performance evaluation method and device, electronic equipment and storage medium | |
CN115113528B (en) | Operation control method, device, equipment and medium of neural network model | |
CN114445047A (en) | Workflow generation method and device, electronic equipment and storage medium | |
CN112632179A (en) | Model construction method and device, storage medium and equipment | |
CN114997329A (en) | Method, apparatus, device, medium and product for generating a model | |
CN114360027A (en) | Training method and device for feature extraction network and electronic equipment | |
CN113505895B (en) | Machine learning engine service system, model training method and configuration method | |
CN114912582A (en) | Model production method, model production device, electronic device, and storage medium | |
CN113205189B (en) | Method for training prediction model, prediction method and device | |
CN118193389A (en) | Test case generation method, device, equipment, storage medium and product | |
CN117724980A (en) | Method and device for testing software framework performance, electronic equipment and storage medium | |
CN116795615A (en) | Chip evaluation method, system, electronic equipment and storage medium | |
CN115115062B (en) | Machine learning model building method, related device and computer program product | |
CN114116688B (en) | Data processing and quality inspection method and device and readable storage medium | |
CN113590484B (en) | Algorithm model service testing method, system, equipment and storage medium | |
CN114706610A (en) | Business flow chart generation method, device, equipment and storage medium | |
CN114756211A (en) | Model training method and device, electronic equipment and storage medium | |
CN113807391A (en) | Task model training method and device, electronic equipment and storage medium | |
CN114520773A (en) | Service request response method, device, server and storage medium | |
CN113590488B (en) | System test method and test platform for simulating financial data support | |
CN112148285B (en) | Interface design method and device, electronic equipment and storage medium | |
EP4089592A1 (en) | Method for determining annotation capability information, related apparatus and computer program product | |
CN116991737A (en) | Software testing method, system, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||