CN111652327A - Model iteration method, system and computer equipment - Google Patents

Model iteration method, system and computer equipment

Info

Publication number
CN111652327A
CN111652327A (application CN202010686152.7A)
Authority
CN
China
Prior art keywords
model
data set
trained
data
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010686152.7A
Other languages
Chinese (zh)
Inventor
李鹏
汪明浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Seektruth Data Technology Service Co ltd
Original Assignee
Beijing Seektruth Data Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Seektruth Data Technology Service Co ltd
Priority to CN202010686152.7A
Publication of CN111652327A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Abstract

The invention relates to the technical field of deep learning, and in particular to a model iteration method, system and computer device. The method acquires test data for which the model's inference results deviate and compiles it into a new data set to be trained; evaluates the data distribution of the data set to be trained to obtain its distribution characteristics; fuses the data set to be trained with the model's original training data set according to those distribution characteristics to obtain a fused training data set; and iteratively trains the model on the fused training data set to obtain the iterated model. The invention completes model iteration automatically, without requiring professionals to train a new model, and thereby effectively improves the efficiency of model iteration.

Description

Model iteration method, system and computer equipment
Technical Field
The invention relates to the technical field of deep learning, and in particular to a model iteration method, system and computer device.
Background
Neural networks (NN) are complex network systems formed by large numbers of simple, widely interconnected processing units (neurons). They reflect many basic features of human brain function and are highly complex nonlinear dynamical learning systems. Neural networks offer large-scale parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning, making them particularly suitable for imprecise and fuzzy information-processing problems that must account for many factors and conditions simultaneously. A neural network model is described by the mathematical model of its neurons; put simply, it is a mathematical model, characterized by its network topology, node characteristics and learning rules. When the application scenario of a neural network model changes, the model must be iterated accordingly to adapt to the change.
Iteration of current neural network models is mainly completed by technicians processing and labeling data and then training a new model on a designed network structure. This approach cannot address inaccurate inference results when the data distribution changes: whenever the model must be replaced with one that meets the new requirements, fresh data has to be collected and fused with the original data, and professional algorithm engineers must preprocess the data and retrain the model. The whole process cannot be automated, training takes a long time, and it depends on professionals.
Disclosure of Invention
To address the above defects in the prior art, the invention provides a model iteration method, system and computer device that complete model iteration automatically, without requiring professionals to train a new model, and thereby effectively improve the efficiency of model iteration.
In a first aspect, the present invention provides a model iteration method, including:
obtaining test data for which the inference result of the model deviates, and forming a new data set to be trained;
performing data distribution evaluation on the data set to be trained to obtain the distribution characteristics of the data set to be trained;
fusing the data set to be trained and the original training data set of the model according to the distribution characteristics of the data set to be trained to obtain a fused training data set;
and performing iterative training on the model by using the fused training data set to obtain an iterated model.
Based on the above, test data whose distribution is inconsistent with the model's original training data set can be identified from the model's inference results and formed into a new data set to be trained. Fusing this new data set with the model's original training data set yields a fused training data set for iterative training. The fused training data set carries the data distribution characteristics of most scenarios, so it provides a more reasonably distributed data set for deep learning training, guarantees the generalization ability of the iterated model, and suits scenarios where business data volume grows rapidly and the data distribution changes markedly. This model iteration method replaces manual training of a new model, effectively improves the efficiency of model iteration, and does not rely excessively on professionals.
In one possible design, obtaining test data for which the inference result of the model deviates and forming a new data set to be trained includes:
acquiring test data, feeding the test data into the model, and performing inference to obtain an inference result;
and judging whether the inference result deviates; if so, classifying and labeling the test data according to manual labeling instructions, and then compiling the test data into the data set to be trained.
Based on the above, by feeding the corresponding test data into the model for inference, test data whose inference results deviate can be collected, and the test data is then labeled accordingly to obtain the data set to be trained. The test data may be prepared in advance for the application scenario, together with the expected results.
In one possible design, the distribution characteristics of the data set to be trained include the category distribution, category count distribution, image size distribution, foreground-background ratio distribution and data source distribution of the data.
Based on the above, the distribution of the data set to be trained can be evaluated more reasonably, across multiple dimensions.
In one possible design, the process of fusing the data set to be trained with the original training data set of the model includes:
when data in the data set to be trained is judged to have the same distribution characteristics as data in the original training data set, determining the weight proportion between that data to be trained and the original training data;
and fusing the data to be trained with the original training data according to that weight proportion.
Based on the above, when the data set to be trained is fused with the model's original training data set, a weight proportion is assigned between data to be trained and original training data that share the same distribution characteristics, and the data are then fused according to that proportion, achieving rapid fusion.
In one possible design, the process of fusing the data set to be trained with the original training data set of the model further includes:
when data to be trained in the data set is judged to have independent distribution characteristics, that data is subjected to data enhancement and then compiled into the fused training data set.
Based on the above, data to be trained with independent distribution characteristics is enhanced and then incorporated into the fused training data set on its own, improving the distribution characteristics of the fused training data set, providing a more reasonably distributed data set for model iterative training, and giving the iterated model better generalization ability.
In one possible design, during iterative training the model is trained using a transfer learning method, and a loss function is used to converge the model.
Based on the above, transfer learning allows the iterative training of the model to be completed quickly, so that the model learns more features; the loss function drives model convergence and prevents overfitting during iterative training, making the model's data distribution more scientific and reasonable.
In a second aspect, the present invention provides a model iteration system, comprising:
the acquisition unit is used for acquiring test data for which the inference result of the model deviates, and forming a new data set to be trained;
the evaluation unit is used for carrying out data distribution evaluation on the data set to be trained to obtain the distribution characteristics of the data set to be trained;
the fusion unit is used for fusing the data set to be trained and the original training data set of the model according to the distribution characteristics of the data set to be trained to obtain a fused training data set;
and the training unit is used for carrying out iterative training on the model according to the fused training data set to obtain an iterated model.
In one possible design, when the training unit is configured to perform iterative training on the model according to the fused training data set, the training unit is specifically configured to: perform iterative training on the model using a transfer learning method, and converge the model using a loss function.
In a third aspect, the present invention provides a computer apparatus comprising:
a memory to store instructions;
a processor configured to read the instructions stored in the memory and execute the method of any of the first aspects according to the instructions.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects described above.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects above.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of the system of the present invention;
FIG. 3 is a schematic diagram of a computer device according to the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It should be understood that the terms first, second, etc. are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and" herein describes another association relationship, meaning that two relationships may exist; for example, "A/and B" may mean: A alone, or both A and B. Further, the character "/" herein generally means that the associated objects before and after it are in an "or" relationship.
It is to be understood that in the description of the present invention, the terms "upper", "vertical", "inside", "outside", and the like, refer to an orientation or positional relationship that is conventionally used for placing the product of the present invention, or that is conventionally understood by those skilled in the art, and are used merely for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and therefore should not be considered as limiting the present invention.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," and "connected" are to be construed broadly, e.g., as meaning fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In the following description, specific details are provided to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
Example 1:
This embodiment provides a model iteration method, as shown in FIG. 1, comprising the following steps:
s101, obtaining test data which enable the model derivation result to have deviation, and forming a new data set to be trained.
Step S101 is the process of obtaining the data set to be trained, and may specifically include:
acquiring test data, feeding the test data into the model, and performing inference to obtain an inference result;
and judging whether the inference result deviates; if so, classifying and labeling the test data according to manual labeling instructions, and then compiling the test data into the data set to be trained.
By feeding the corresponding test data into the model for inference, test data whose inference results deviate can be collected, and the test data is then labeled accordingly to obtain the data set to be trained. The test data may be prepared in advance for the application scenario, together with the expected results.
Classification and labeling of the test data includes ground-truth labeling and tag labeling; if the test data is sliced, the positions of the cut points must also be labeled.
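As an illustration only, the following is a minimal Python sketch of how step S101 might be realized, assuming a PyTorch classification model and a DataLoader of labeled test samples; the function name, the argmax decision rule and the loader interface are illustrative assumptions, not part of the claimed method.

```python
import torch

def collect_deviated_samples(model, test_loader, device="cpu"):
    """Feed test data into the model, infer, and keep the samples whose
    inference result deviates from the expected label. The returned set
    would still be classified and labeled manually before being compiled
    into the data set to be trained."""
    model.eval()
    deviated_inputs, deviated_labels = [], []
    with torch.no_grad():
        for inputs, expected in test_loader:
            predictions = model(inputs.to(device)).argmax(dim=1).cpu()
            mask = predictions != expected  # the inference result deviates
            if mask.any():
                deviated_inputs.append(inputs[mask])
                deviated_labels.append(expected[mask])
    if not deviated_inputs:
        return None  # no deviation observed; no iteration needed
    return torch.utils.data.TensorDataset(
        torch.cat(deviated_inputs), torch.cat(deviated_labels))
```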
S102, performing data distribution evaluation on the data set to be trained to obtain its distribution characteristics.
The distribution characteristics of the data set to be trained include the category distribution, category count distribution, image size distribution, foreground-background ratio distribution, data source distribution and the like. Through these characteristics, the distribution of the data set to be trained can be evaluated more reasonably, across multiple dimensions.
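The evaluation of S102 can be sketched as simple counting over the dimensions listed above. The record layout (dicts with "label", "image" and "source" keys) is a hypothetical choice made for illustration; the foreground-background ratio is omitted here because it would require segmentation masks.

```python
from collections import Counter

def evaluate_distribution(samples):
    """Summarize distribution characteristics of the data set to be
    trained: category distribution, category count, image size
    distribution and data source distribution."""
    categories = Counter(s["label"] for s in samples)
    sizes = Counter(tuple(s["image"].shape[:2]) for s in samples)
    sources = Counter(s["source"] for s in samples)
    total = sum(categories.values())
    return {
        "category_ratio": {c: n / total for c, n in categories.items()},
        "category_count": len(categories),
        "size_distribution": dict(sizes),
        "source_distribution": dict(sources),
    }
```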
S103, fusing the data set to be trained and the original training data set of the model according to the distribution characteristics of the data set to be trained to obtain a fused training data set.
The fusion process includes the following: when data in the data set to be trained is judged to have the same distribution characteristics as data in the original training data set, the weight proportion between that data to be trained and the original training data is determined, and the two are fused according to that proportion. Assigning a weight proportion to data with shared distribution characteristics and fusing by that proportion achieves rapid fusion.
The fusion process further includes the following: when data to be trained is judged to have independent distribution characteristics, that data is subjected to data enhancement and then compiled into the fused training data set. Enhancing such data before incorporating it on its own improves the distribution characteristics of the fused training data set, provides a more reasonably distributed data set for model iterative training, and gives the iterated model better generalization ability.
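A minimal sketch of the fusion in S103 follows, assuming each sample is a dict with a "label" key and that sharing a label stands in for sharing distribution characteristics; the default weight proportion and the probabilistic mixing are illustrative assumptions, not values fixed by the method.

```python
import random

def fuse_datasets(to_train, original, weight_new=0.5, augment=None):
    """Fuse the data set to be trained with the original training data
    set: samples sharing distribution characteristics with the original
    set are admitted according to a weight proportion, while samples
    with independent characteristics are data-enhanced first."""
    known = {s["label"] for s in original}
    fused = list(original)
    for sample in to_train:
        if sample["label"] in known:
            # Shared characteristics: mix by the given weight proportion.
            if random.random() < weight_new:
                fused.append(sample)
        else:
            # Independent characteristics: enhance, then add both copies.
            fused.append(sample)
            if augment is not None:
                fused.append(augment(sample))
    random.shuffle(fused)
    return fused
```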
S104, performing iterative training on the model with the fused training data set to obtain the iterated model.
During iterative training, the model is trained using a transfer learning method and converged using a loss function. Transfer learning lets the model inherit its original capability: the relationship between the frozen network layers and the trainable layers is evaluated dynamically, so iterative training completes quickly and the model learns more features. Convergence is achieved by adding a suitable adaptive loss function, which prevents overfitting during iterative training while biasing the model toward the original data's inference behavior, making the iterative training process more stable and efficient. The whole procedure shortens the deep learning training cycle, a clear advantage in scenarios with rapid iteration, and makes the model's data distribution more scientific and reasonable.
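The training of S104 can be sketched as a standard transfer learning fine-tune in PyTorch: freeze the feature-extraction layers so the model inherits its original capability, retrain the head on the fused training data set, and converge with a loss function. The `fc` attribute assumes a torchvision-style classifier, and cross-entropy is an illustrative choice of loss; neither is mandated by the text.

```python
import torch
import torch.nn as nn

def iterative_training(model, fused_loader, epochs=5, lr=1e-4, device="cpu"):
    """Iteratively train the model on the fused training data set using
    transfer learning, converging it with a loss function."""
    model = model.to(device)
    for param in model.parameters():
        param.requires_grad = False      # freeze the network layers
    for param in model.fc.parameters():
        param.requires_grad = True       # keep the final layer trainable
    criterion = nn.CrossEntropyLoss()    # loss function for convergence
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, labels in fused_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```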
Example 2:
This embodiment provides a model iteration system, as shown in FIG. 2, comprising:
the acquisition unit is used for acquiring test data for which the inference result of the model deviates, and forming a new data set to be trained;
the evaluation unit is used for carrying out data distribution evaluation on the data set to be trained to obtain the distribution characteristics of the data set to be trained;
the fusion unit is used for fusing the data set to be trained and the original training data set of the model according to the distribution characteristics of the data set to be trained to obtain a fused training data set;
and the training unit is used for carrying out iterative training on the model according to the fused training data set to obtain an iterated model.
In one possible design, when the training unit is configured to perform iterative training on the model according to the fused training data set, the training unit is specifically configured to: perform iterative training on the model using a transfer learning method, and converge the model using a loss function.
In one possible design, when the fusion unit is used to fuse the data set to be trained with the original training data set of the model, the fusion unit is specifically configured to:
when data in the data set to be trained is judged to have the same distribution characteristics as data in the original training data set, determining the weight proportion between that data to be trained and the original training data;
and fusing the data to be trained with the original training data according to that weight proportion.
In one possible design, when the fusion unit is configured to fuse the data set to be trained with the original training data set of the model, the fusion unit is further specifically configured to: when data to be trained is judged to have independent distribution characteristics, subject that data to data enhancement and then compile it into the fused training data set.
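The four units can be composed into a simple skeleton, sketched below; the injected callables correspond to the unit responsibilities above, and both the class and the signatures are illustrative assumptions rather than a prescribed architecture.

```python
class ModelIterationSystem:
    """Composes the acquisition, evaluation, fusion and training units."""

    def __init__(self, acquire, evaluate, fuse, train):
        self.acquire = acquire    # acquisition unit
        self.evaluate = evaluate  # evaluation unit
        self.fuse = fuse          # fusion unit
        self.train = train        # training unit

    def iterate(self, model, test_data, original_dataset):
        to_train = self.acquire(model, test_data)
        if to_train is None:
            return model  # no deviated samples; keep the current model
        distribution = self.evaluate(to_train)
        fused = self.fuse(to_train, original_dataset, distribution)
        return self.train(model, fused)
```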
Example 3:
This embodiment provides a computer device, as shown in FIG. 3, comprising:
a memory to store instructions;
and a processor, used for reading the instructions stored in the memory and executing the model iteration method of Embodiment 1 according to the instructions.
The processor may adopt, but is not limited to, a microprocessor of the STM32F105 series; the memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), flash memory, first-in-first-out memory (FIFO) and/or first-in-last-out memory (FILO).
Example 4:
This embodiment provides a computer-readable storage medium having stored thereon instructions that, when executed on a computer, cause the computer to perform the model iteration method of Embodiment 1. The computer-readable storage medium is a carrier for storing data and may include, but is not limited to, floppy disks, optical disks, hard disks, flash memory, flash drives and/or memory sticks; the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
Example 5:
the present embodiment provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the model iteration method of embodiment 1. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device to perform the methods described in the embodiments or some portions of the embodiments.
The present invention is not limited to the above-described alternative embodiments, and anyone can derive various other forms of products in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims; the description is to be interpreted accordingly.

Claims (10)

1. A model iteration method, comprising:
obtaining test data for which the inference result of the model deviates, and forming a new data set to be trained;
performing data distribution evaluation on the data set to be trained to obtain the distribution characteristics of the data set to be trained;
fusing the data set to be trained and the original training data set of the model according to the distribution characteristics of the data set to be trained to obtain a fused training data set;
and performing iterative training on the model by using the fused training data set to obtain an iterated model.
2. The model iteration method of claim 1, wherein obtaining test data for which the inference result of the model deviates and forming a new data set to be trained comprises:
acquiring test data, feeding the test data into the model, and performing inference to obtain an inference result;
and judging whether the inference result deviates; if so, classifying and labeling the test data according to manual labeling instructions, and then compiling the test data into the data set to be trained.
3. The model iteration method of claim 1, wherein the distribution characteristics of the data set to be trained comprise a category distribution, a category count distribution, an image size distribution, a foreground-background ratio distribution and a data source distribution of the data.
4. The model iteration method of claim 1, wherein the process of fusing the data set to be trained with the original training data set of the model comprises:
when data in the data set to be trained is judged to have the same distribution characteristics as data in the original training data set, determining the weight proportion between that data to be trained and the original training data;
and fusing the data to be trained with the original training data according to that weight proportion.
5. The model iteration method of claim 1, wherein the process of fusing the data set to be trained with the original training data set of the model further comprises:
when data to be trained in the data set is judged to have independent distribution characteristics, subjecting that data to data enhancement and then compiling it into the fused training data set.
6. The model iteration method of claim 1, wherein in the iterative training of the model, the model is iteratively trained by adopting a transfer learning method, and the model is converged by adopting a loss function.
7. A model iteration system, comprising:
the acquisition unit is used for acquiring test data for which the inference result of the model deviates, and forming a new data set to be trained;
the evaluation unit is used for carrying out data distribution evaluation on the data set to be trained to obtain the distribution characteristics of the data set to be trained;
the fusion unit is used for fusing the data set to be trained and the original training data set of the model according to the distribution characteristics of the data set to be trained to obtain a fused training data set;
and the training unit is used for carrying out iterative training on the model according to the fused training data set to obtain an iterated model.
8. The model iteration system of claim 7, wherein the training unit, when performing iterative training on the model according to the fused training data set, is specifically configured to: perform iterative training on the model using a transfer learning method, and converge the model using a loss function.
9. A computer device, comprising:
a memory to store instructions;
a processor for reading the instructions stored in the memory and executing the method according to the instructions as claimed in any one of claims 1 to 6.
10. A computer-readable storage medium having stored thereon instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-6.
CN202010686152.7A 2020-07-16 2020-07-16 Model iteration method, system and computer equipment Pending CN111652327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010686152.7A CN111652327A (en) 2020-07-16 2020-07-16 Model iteration method, system and computer equipment


Publications (1)

Publication Number Publication Date
CN111652327A (en) 2020-09-11

Family

ID=72349047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010686152.7A Pending CN111652327A (en) 2020-07-16 2020-07-16 Model iteration method, system and computer equipment

Country Status (1)

Country Link
CN (1) CN111652327A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10339468B1 (en) * 2014-10-28 2019-07-02 Groupon, Inc. Curating training data for incremental re-training of a predictive model
CN108805258A (en) * 2018-05-23 2018-11-13 北京图森未来科技有限公司 A kind of neural network training method and its device, computer server
CN110287324A (en) * 2019-06-27 2019-09-27 成都冰鉴信息科技有限公司 A kind of data dynamic label placement method and device for coarseness text classification
CN110852446A (en) * 2019-11-13 2020-02-28 腾讯科技(深圳)有限公司 Machine learning model training method, device and computer readable storage medium
CN110909889A (en) * 2019-11-29 2020-03-24 北京迈格威科技有限公司 Training set generation and model training method and device based on feature distribution
CN111126607A (en) * 2020-04-01 2020-05-08 阿尔法云计算(深圳)有限公司 Data processing method, device and system for model training

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115151182A (en) * 2020-10-10 2022-10-04 豪夫迈·罗氏有限公司 Method and system for diagnostic analysis
CN115151182B (en) * 2020-10-10 2023-11-14 豪夫迈·罗氏有限公司 Method and system for diagnostic analysis
CN113239732A (en) * 2021-04-13 2021-08-10 联合汽车电子有限公司 Engine knock intensity calculation method, system and readable storage medium
CN113239732B (en) * 2021-04-13 2024-04-19 联合汽车电子有限公司 Engine knock intensity calculation method, system and readable storage medium
CN116402354A (en) * 2023-06-07 2023-07-07 北京京东乾石科技有限公司 Evaluation parameter determining method and device, medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination