CN112052915A - Data training method, device, equipment and storage medium - Google Patents
Data training method, device, equipment and storage medium
- Publication number
- CN112052915A CN112052915A CN202011055438.1A CN202011055438A CN112052915A CN 112052915 A CN112052915 A CN 112052915A CN 202011055438 A CN202011055438 A CN 202011055438A CN 112052915 A CN112052915 A CN 112052915A
- Authority
- CN
- China
- Prior art keywords
- sample data
- data
- negative sample
- positive
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
            - G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N20/00—Machine learning
        - G06N20/20—Ensemble learning
Abstract
According to the data training method, device, equipment and storage medium provided herein, sample data in an original training data set is obtained and preprocessed to obtain positive sample data and negative sample data; for the positive and negative sample data respectively, all the column features they contain are traversed; each column feature is randomly shuffled, and the shuffled columns are recombined to obtain new positive sample data and new negative sample data; the new samples are added to the original training data set to obtain a new training data set; and model training is performed with the new set. In this application, the features within each sample are randomly shuffled and recombined, so that the N features are mutually independent and each feature follows a normal distribution.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data training method, apparatus, device, and storage medium.
Background
At present, when model training is performed with a small amount of training sample data, overfitting easily occurs: the model depends excessively on the training sample data, which adversely affects the accuracy of the model's predictions.
For image data and voice data, data enhancement means such as flipping, rotation, and Gaussian noise are generally used to amplify the scale of the training sample data, mitigating overfitting during model training and improving the accuracy of model prediction. For non-image and non-voice data, however, such data enhancement means cannot amplify the sample data scale, so model training with such data is prone to overfitting and inaccurate prediction results.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data training method, apparatus, device, and storage medium, so that the sample data scale can be expanded by a data enhancement means during model training with non-image and non-voice data, thereby mitigating model overfitting and improving the accuracy of model prediction.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
in one aspect, an embodiment of the present invention provides a data training method, where the method includes:
acquiring sample data in an original training data set;
preprocessing the sample data to obtain positive sample data and negative sample data;
traversing, for the positive sample data and the negative sample data respectively, all the column features they contain;
randomly shuffling each column feature in all the column features contained in the positive sample data and the negative sample data respectively, and recombining to obtain new positive sample data and new negative sample data;
adding the new positive sample data and the new negative sample data to the original training data set to obtain a new training data set;
and performing model training by using the new training data set.
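The six steps above can be sketched end-to-end as follows. This is a minimal illustration under assumed conditions: tabular data held in a NumPy array, with 0/1 labels already separating negative and positive samples. The function name `augment` and the fixed seed are illustrative, not part of the claimed method.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(dataset: np.ndarray, labels: np.ndarray):
    """Shuffle each column independently within the positive and the
    negative samples, then append the shuffled copies to the original set."""
    new_rows, new_labels = [], []
    for label in (0, 1):                      # negative samples, then positive
        part = dataset[labels == label].copy()
        for col in range(part.shape[1]):      # shuffle each column in place
            rng.shuffle(part[:, col])
        new_rows.append(part)
        new_labels.append(np.full(part.shape[0], label))
    return (np.concatenate([dataset, *new_rows]),
            np.concatenate([labels, *new_labels]))

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
y = np.array([0, 0, 1, 1])
X_new, y_new = augment(X, y)
print(X_new.shape)  # (8, 2): the data set has doubled
```

Each value stays in its original column and its original class; only its row changes, so the augmented set keeps the per-column, per-class value distributions of the original.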
Optionally, the traversing all column features included in the positive sample data and the negative sample data respectively for the positive sample data and the negative sample data includes:
respectively traversing all column characteristics contained in the positive sample data with a first preset proportion and the negative sample data with a second preset proportion;
the first preset proportion indicates the ratio of the number of positive sample data used for traversal to the total number of positive sample data, and the second preset proportion indicates the ratio of the number of negative sample data used for traversal to the total number of negative sample data.
Optionally, the traversing all column features contained in the positive sample data and the negative sample data respectively for the positive sample data and the negative sample data includes:
respectively traversing all column characteristics contained in the positive sample data and the negative sample data which meet a third preset proportional relation aiming at the positive sample data and the negative sample data which meet the third preset proportional relation;
wherein the third preset proportion indicates a proportion between the number of positive sample data used for traversal and the number of negative sample data used for traversal.
Optionally, the traversing all column features contained in the positive sample data and the negative sample data respectively for the positive sample data and the negative sample data includes:
and traversing all column features contained in all positive sample data and all negative sample data respectively aiming at all positive sample data and all negative sample data.
In another aspect, an embodiment of the present invention provides a data training apparatus, where the apparatus includes:
the acquisition module is used for acquiring sample data in the original training data set;
the preprocessing module is used for preprocessing the sample data to obtain positive sample data and negative sample data;
the traversal feature module is used for traversing all column features contained in the positive sample data and the negative sample data respectively;
the processing module is used for randomly shuffling each column feature in all the column features contained in the positive sample data and the negative sample data respectively, and recombining to obtain new positive sample data and new negative sample data;
an adding module, configured to add the new positive sample data and the new negative sample data to the original training data set to obtain a new training data set;
and the training module is used for carrying out model training by utilizing the new training data set.
Optionally, the traversal feature module is specifically configured to traverse all column features included in the positive sample data of the first preset proportion and the negative sample data of the second preset proportion, respectively, for the positive sample data of the first preset proportion and the negative sample data of the second preset proportion;
the first preset proportion indicates the proportion of the number of the positive sample data used for traversal to all the number of the positive sample data, and the second preset proportion indicates the proportion of the number of the negative sample data used for traversal to all the number of the negative sample data.
Optionally, the traversal feature module is specifically configured to traverse all column features included in the positive sample data and the negative sample data that satisfy a third preset proportional relationship, respectively, for the positive sample data and the negative sample data that satisfy the third preset proportional relationship;
wherein the third preset proportion indicates a proportion between the number of positive sample data used for traversal and the number of negative sample data used for traversal.
Optionally, the traversal feature module is specifically configured to traverse all column features contained in all the positive sample data and all the negative sample data, respectively.
In another aspect, an embodiment of the present invention provides a data training apparatus, including a processor and a memory;
the memory for storing a computer program;
the processor, when calling and executing the computer program stored in the memory, implementing the method of any of claims 1 to 4.
In another aspect, an embodiment of the present invention provides a storage medium having stored therein computer-executable instructions which, when loaded and executed by a processor, implement the method according to any one of claims 1 to 4.
With the data training method, device, equipment, and storage medium provided by the embodiments of the present invention, sample data in an original training data set is obtained and preprocessed to obtain positive sample data and negative sample data; all column features contained in the positive and negative sample data are traversed respectively; each column feature is randomly shuffled and the columns are recombined to obtain new positive and negative sample data; the new samples are added to the original training data set to obtain a new training data set; and model training is performed using the new training data set. Because the features within each sample are randomly shuffled and recombined, the N features become mutually independent and each feature follows a normal distribution. On this basis, non-image and non-voice data can also be data-enhanced, effectively expanding the data set, so that model overfitting is mitigated during training and the accuracy of model prediction is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a data training method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a data training apparatus according to an embodiment of the present invention;
fig. 3 is a block diagram of a data training apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As described in the Background, in the process of model training with non-image data and non-voice data, the scale of the sample data cannot be amplified by data enhancement means, so overfitting and inaccurate prediction results easily occur during such model training.
Therefore, embodiments of the present invention provide a data training method, apparatus, device, and storage medium, so that the sample data scale can be amplified by a data enhancement means even when model training uses non-image and non-voice data, thereby mitigating model overfitting and improving the accuracy of model prediction.
Referring to fig. 1, a flow chart of a data training method according to an embodiment of the present invention is shown. The method comprises the following steps:
s101: sample data in the original training data set is acquired.
In a specific implementation of S101, either all of the sample data in the original training data set or only a part of it may be acquired.
S102: and preprocessing the sample data to obtain positive sample data and negative sample data.
In a specific implementation of S102, the sample data obtained in S101 may be preprocessed as follows:
First, the sample data is screened to remove abnormal data.
Second, the sample data is standardized: each attribute is scaled to a specified range and transformed to have zero mean and unit variance, so that each feature in the sample data follows a Gaussian normal distribution.
Finally, feature coding is applied: a numerical attribute of the sample data is converted into a Boolean attribute, with a threshold set as the separation point that divides the attribute values into 0 and 1. Optionally, in a specific implementation, sample data whose attribute value is 1 may be called positive sample data, and sample data whose attribute value is 0 negative sample data.
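The three preprocessing steps can be sketched compactly as below. The 3-sigma screening rule, the use of the first attribute for encoding, and the zero threshold are all illustrative assumptions; the description fixes none of them.

```python
import numpy as np

def preprocess(samples: np.ndarray, threshold: float = 0.0):
    """Screen, standardize, and threshold-encode raw samples.

    The 3-sigma screen on the first attribute and the zero threshold
    are illustrative choices, not fixed by the method."""
    # screening: drop rows whose first attribute is a 3-sigma outlier
    col = samples[:, 0]
    z = (col - col.mean()) / col.std()
    kept = samples[np.abs(z) <= 3]
    # standardization: scale every attribute to zero mean, unit variance
    scaled = (kept - kept.mean(axis=0)) / kept.std(axis=0)
    # feature coding: one numerical attribute becomes a Boolean 0/1 label
    labels = (scaled[:, 0] > threshold).astype(int)
    return scaled[labels == 1], scaled[labels == 0]  # positive, negative

data = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
pos, neg = preprocess(data)
print(pos.shape, neg.shape)  # (2, 2) (2, 2)
```

After standardization every retained feature has zero mean and unit variance, which matches the Gaussian-normalization goal stated above.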
S103: and traversing all column features contained in the positive sample data and the negative sample data respectively.
In the specific implementation of S103, there may be a plurality of implementation schemes.
Optionally, the first scheme is: traversing, respectively, all column features contained in a first preset proportion of the positive sample data and in a second preset proportion of the negative sample data.
The first preset proportion indicates the ratio of the number of positive sample data used for traversal to the total number of positive sample data, and the second preset proportion indicates the ratio of the number of negative sample data used for traversal to the total number of negative sample data.
It should be noted that the first preset ratio and the second preset ratio may be the same value or different values. Of course, the first preset ratio may be a value greater than the second preset ratio, or may be a value less than the second preset ratio, which is not limited herein.
The second scheme is as follows: for positive sample data and negative sample data that satisfy a third preset proportional relation, traversing respectively all the column features they contain.
Wherein the third preset proportion indicates a proportion between the number of positive sample data used for traversal and the number of negative sample data used for traversal.
It should be noted that the third preset proportion may be a proportion value obtained by dividing the number of the positive sample data used for traversal by the number of the negative sample data used for traversal, or may be a proportion value obtained by dividing the number of the negative sample data used for traversal by the number of the positive sample data used for traversal.
The third scheme is as follows: and traversing all column features contained in all positive sample data and all negative sample data respectively.
It should further be noted that any of the above three schemes may be selected according to the application requirements of the actual scene, and the preset proportions may likewise be set according to those requirements. For example, when the total positive sample data is less than the total negative sample data, the third preset proportion (the number of positive sample data used for traversal divided by the number of negative sample data used for traversal) may be set to a larger value; this is, of course, only an example.
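The traversal schemes above can be sketched as one helper, under the assumption that the samples are rows of NumPy arrays. Scheme 1 takes preset fractions of each class; ratios of 1.0 reduce it to scheme 3 (use everything); a scheme-2 variant would instead fix the ratio between the two selected counts. The function name and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_for_traversal(pos, neg, pos_ratio=1.0, neg_ratio=1.0):
    """Pick preset fractions of the positive and negative samples
    (scheme 1); ratios of 1.0 reduce to scheme 3, i.e. use all samples."""
    n_pos = int(len(pos) * pos_ratio)
    n_neg = int(len(neg) * neg_ratio)
    pos_pick = rng.choice(len(pos), size=n_pos, replace=False)
    neg_pick = rng.choice(len(neg), size=n_neg, replace=False)
    return pos[pos_pick], neg[neg_pick]

pos = np.arange(20.0).reshape(10, 2)
neg = np.arange(40.0).reshape(20, 2)
p, n = sample_for_traversal(pos, neg, pos_ratio=0.5, neg_ratio=0.25)
print(p.shape, n.shape)  # (5, 2) (5, 2)
```

With `pos_ratio=0.5` and `neg_ratio=0.25`, the example also happens to satisfy a third preset proportion of 1:1 between the selected positive and negative counts.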
S104: and randomly disordering each column feature in all column features aiming at all column features contained in the positive sample data and the negative sample data respectively, and recombining to obtain new positive sample data and new negative sample data.
In a specific implementation of S104, each column feature among all the column features contained in the positive sample data is randomly shuffled and the columns are recombined to obtain new positive sample data; likewise, each column feature among all the column features contained in the negative sample data is randomly shuffled and recombined to obtain new negative sample data.
In a specific implementation, Python's `random.shuffle` function may be used for the random shuffling, although other approaches may also be used.
It should be noted that, in the process of random shuffling, the column features currently being shuffled are shuffled within their own column, and the shuffled features are still in that column.
To facilitate understanding of the random column shuffling described above, the following example is given; it is, of course, only illustrative.
For example, suppose the positive sample data contains 3 columns of features. The first, second, and third column features are randomly shuffled, either at the same time or one after another (for example, in the order first column, second column, third column). After shuffling, features originally in the first column are still in the first column, features originally in the second column are still in the second column, and features originally in the third column are still in the third column.
It should be noted that, because each column feature among all the column features contained in the sample data is shuffled independently, the features contained in the shuffled sample data are mutually independent, which facilitates the subsequent amplification of the sample data scale by data enhancement means.
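The column-wise shuffle of S104, and the three-column example above, can be sketched with Python's standard `random.shuffle`, which the text itself mentions. Splitting rows into columns, shuffling each column in place, and zipping back guarantees that every feature stays in its original column; the seed is illustrative.

```python
import random

random.seed(0)

def shuffle_columns(rows):
    """Shuffle each column independently with random.shuffle; every value
    stays in its original column, only its row position changes."""
    columns = [list(col) for col in zip(*rows)]  # split rows into columns
    for col in columns:
        random.shuffle(col)                      # in-place per-column shuffle
    return [list(row) for row in zip(*columns)]  # recombine into rows

samples = [[1, 'a', 0.1], [2, 'b', 0.2], [3, 'c', 0.3]]
new_samples = shuffle_columns(samples)
```

The recombined rows are new samples: each is a fresh combination of values drawn from the original columns, which is exactly the amplification the method relies on.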
S105: and adding the new positive sample data and the new negative sample data into the original training data set to obtain a new training data set.
In a specific implementation of S105, the new positive sample data and the new negative sample data may be added to the original training data set separately, or they may be mixed first and then added to the original training data set.
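A minimal sketch of S105, assuming the samples are NumPy arrays: whether the new positive and negative samples are appended separately or mixed first, the resulting training set contains the same rows. The stand-in arrays are illustrative only.

```python
import numpy as np

# illustrative stand-ins for the original set and the shuffled new samples
X_orig = np.zeros((4, 3))
X_new_pos = np.ones((2, 3))
X_new_neg = np.full((2, 3), 2.0)

# append the two new sets separately ...
X_train = np.concatenate([X_orig, X_new_pos, X_new_neg])

# ... or mix them first and append once; same rows, at most in another order
mixed = np.concatenate([X_new_pos, X_new_neg])
X_train_mixed = np.concatenate([X_orig, mixed])
print(X_train.shape)  # (8, 3)
```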
S106: model training is performed using the new training data set.
In the scheme provided by the embodiment of the invention, the features within each sample are randomly shuffled and recombined, so that the N features become mutually independent and each feature follows a normal distribution. On this basis, non-image and non-voice data can be data-enhanced, effectively expanding the data set; when the data is used for training, model overfitting is mitigated and the accuracy of model prediction is improved.
Based on the data training method disclosed by the embodiment of the invention, correspondingly, the embodiment of the invention also discloses a data training device. Referring to fig. 2, a block diagram of a data training apparatus according to an embodiment of the present invention is shown.
The data training device includes: an acquisition module 201, a pre-processing module 202, a traversal features module 203, a processing module 204, an addition module 205, and a training module 206.
The obtaining module 201 is configured to: sample data in the original training data set is acquired.
The preprocessing module 202 is configured to: and preprocessing the sample data to obtain positive sample data and negative sample data.
The traversal features module 203 is to: and traversing all column features contained in the positive sample data and the negative sample data respectively.
The processing module 204 is configured to: randomly shuffle each column feature in all the column features contained in the positive sample data and the negative sample data respectively, and recombine them to obtain new positive sample data and new negative sample data.
The adding module 205 is configured to: and adding the new positive sample data and the new negative sample data into the original training data set to obtain a new training data set.
The training module 206 is configured to: model training is performed using the new training data set.
Optionally, the traversal feature module 203 is specifically configured to: and traversing all column characteristics contained in the positive sample data with the first preset proportion and the negative sample data with the second preset proportion respectively.
The first preset proportion indicates the ratio of the number of positive sample data used for traversal to the total number of positive sample data, and the second preset proportion indicates the ratio of the number of negative sample data used for traversal to the total number of negative sample data.
Alternatively, the traversal features module 203 is specifically configured to: and traversing all column characteristics contained in the positive sample data and the negative sample data which meet the third preset proportional relation respectively according to the positive sample data and the negative sample data which meet the third preset proportional relation.
Wherein the third preset proportion indicates a proportion between the number of positive sample data used for traversal and the number of negative sample data used for traversal.
Alternatively, the traversal features module 203 is specifically configured to: and traversing all column features contained in all positive sample data and all negative sample data respectively.
For a specific implementation principle of each module in the data training device disclosed in the embodiment of the present invention, reference may be made to corresponding contents in the data training method disclosed in the embodiment of the present invention, and details are not described here again.
With the data training apparatus provided by the embodiment of the present invention, the acquisition module acquires sample data in the original training data set; the preprocessing module preprocesses the sample data to obtain positive sample data and negative sample data; the traversal feature module traverses, for the positive and negative sample data respectively, all the column features they contain; the processing module randomly shuffles each column feature and recombines the columns to obtain new positive and negative sample data; the adding module adds the new samples to the original training data set to obtain a new training data set; and the training module performs model training using the new set. Because the features within each sample are randomly shuffled and recombined, the N features become mutually independent and each feature follows a normal distribution; on this basis, non-image and non-voice data can be data-enhanced, effectively expanding the data set, mitigating model overfitting, and improving the accuracy of model prediction.
Based on the data training method and the data training device disclosed by the embodiment of the invention, the embodiment of the invention also discloses data training equipment. Referring to fig. 3, a block diagram of a data training apparatus according to an embodiment of the present invention is shown.
The data training apparatus includes: a processor 301 and a memory 302.
A memory 302 for storing a computer program.
The processor 301 is configured to call and execute the computer program stored in the memory 302 to implement any one of the data training methods disclosed above in the embodiments of the present invention.
Based on the data training method, the data training device and the data training equipment disclosed by the embodiment of the invention, the embodiment of the invention also discloses a storage medium.
The storage medium has stored therein computer-executable instructions. The computer executable instructions, when loaded and executed by a processor, implement any of the data training methods disclosed above in embodiments of the invention.
The embodiments in this specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the apparatus and device embodiments are substantially similar to the method embodiments and are therefore described relatively simply; for related points, refer to the description of the method embodiments. The apparatus and device embodiments described above are only illustrative: units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. A person of ordinary skill in the art can understand and implement this without creative effort.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the components and steps above have been described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method of data training, the method comprising:
acquiring sample data in an original training data set;
preprocessing the sample data to obtain positive sample data and negative sample data;
respectively traversing all column features contained in the positive sample data and the negative sample data aiming at the positive sample data and the negative sample data;
randomly shuffling each column feature in all column features contained in the positive sample data and the negative sample data respectively, and recombining to obtain new positive sample data and new negative sample data;
adding the new positive sample data and the new negative sample data to the original training data set to obtain a new training data set;
and performing model training by using the new training data set.
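The augmentation recited in claim 1 — independently shuffling each column feature within the positive and negative sample groups, recombining the shuffled columns into new rows, and appending them to the original training set — can be sketched as follows. This is a minimal illustration of the technique, not the patented implementation; the function and variable names are our own.

```python
import numpy as np

def augment_by_column_shuffle(features, labels, rng=None):
    """Shuffle each column feature independently within the positive and
    negative sample groups, then recombine the shuffled columns into new
    rows and append them to the original training set."""
    rng = np.random.default_rng() if rng is None else rng
    new_rows, new_labels = [], []
    for cls in (0, 1):  # negative samples, then positive samples
        group = features[labels == cls]
        shuffled = group.copy()
        for col in range(group.shape[1]):  # traverse all column features
            # randomly shuffle this column within the group
            shuffled[:, col] = rng.permutation(group[:, col])
        new_rows.append(shuffled)  # recombined new sample data
        new_labels.append(np.full(len(group), cls))
    # add the new samples to the original training data set
    return (np.vstack([features] + new_rows),
            np.concatenate([labels] + new_labels))
```

Because each column is permuted only within its own class, every value in a new row still comes from a real sample of the same class; only the row-wise combinations are new, which is what lets the augmented set grow without introducing out-of-distribution feature values.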
2. The method according to claim 1, wherein said traversing all column features contained in said positive and negative sample data for said positive and negative sample data, respectively, comprises:
traversing all column features contained in a first preset proportion of the positive sample data and a second preset proportion of the negative sample data, respectively;
wherein the first preset proportion indicates the ratio of the number of positive sample data used for traversal to the total number of positive sample data, and the second preset proportion indicates the ratio of the number of negative sample data used for traversal to the total number of negative sample data.
3. The method according to claim 1, wherein said traversing all column features contained in said positive and negative sample data for said positive and negative sample data, respectively, comprises:
traversing all column features contained in positive sample data and negative sample data whose numbers satisfy a third preset proportion, respectively;
wherein the third preset proportion indicates the ratio between the number of positive sample data used for traversal and the number of negative sample data used for traversal.
4. The method according to claim 1, wherein said traversing all column features contained in said positive and negative sample data for said positive and negative sample data, respectively, comprises:
traversing all column features contained in all the positive sample data and all the negative sample data, respectively.
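Claims 2 through 4 differ only in how many samples participate in the traversal: claim 2 fixes an independent proportion for each class, claim 3 fixes the ratio between the two selected counts, and claim 4 is the special case where every sample participates. A hedged sketch of the claim-2 selection (names are our own, not from the patent):

```python
import numpy as np

def sample_by_proportion(group, proportion, rng=None):
    """Select `proportion` of the rows of `group`, without replacement,
    to participate in the column-shuffle augmentation (claim 2).
    proportion = 1.0 reduces to claim 4 (all samples participate)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(len(group) * proportion)
    idx = rng.choice(len(group), size=n, replace=False)
    return group[idx]
```

For claim 3, one would instead choose the two counts so that `n_pos / n_neg` equals the third preset proportion, e.g. to keep the augmented data balanced between classes.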
5. A data training apparatus, the apparatus comprising:
the acquisition module is used for acquiring sample data in the original training data set;
the preprocessing module is used for preprocessing the sample data to obtain positive sample data and negative sample data;
the traversal feature module is used for traversing all column features contained in the positive sample data and the negative sample data respectively;
the processing module is used for randomly shuffling each of the column features contained in the positive sample data and the negative sample data respectively, and recombining the shuffled columns to obtain new positive sample data and new negative sample data;
an adding module, configured to add the new positive sample data and the new negative sample data to the original training data set to obtain a new training data set;
and the training module is used for carrying out model training by utilizing the new training data set.
6. The apparatus of claim 5,
the traversal feature module is specifically configured to traverse all column features included in a first preset proportion of the positive sample data and a second preset proportion of the negative sample data, respectively;
wherein the first preset proportion indicates the ratio of the number of positive sample data used for traversal to the total number of positive sample data, and the second preset proportion indicates the ratio of the number of negative sample data used for traversal to the total number of negative sample data.
7. The apparatus of claim 5,
the traversal feature module is specifically configured to traverse all column features included in positive sample data and negative sample data whose numbers satisfy a third preset proportion, respectively;
wherein the third preset proportion indicates the ratio between the number of positive sample data used for traversal and the number of negative sample data used for traversal.
8. The apparatus of claim 5,
the traversal feature module is specifically configured to traverse all column features included in all the positive sample data and all the negative sample data, respectively.
9. A data training device comprising a processor and a memory;
the memory for storing a computer program;
the processor is configured to call and execute the computer program stored in the memory to implement the method of any one of claims 1 to 4.
10. A storage medium having stored thereon computer-executable instructions which, when loaded and executed by a processor, carry out a method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011055438.1A CN112052915B (en) | 2020-09-29 | 2020-09-29 | Data training method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112052915A true CN112052915A (en) | 2020-12-08 |
CN112052915B CN112052915B (en) | 2024-02-13 |
Family
ID=73605073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011055438.1A Active CN112052915B (en) | 2020-09-29 | 2020-09-29 | Data training method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112052915B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762423A (en) * | 2021-11-09 | 2021-12-07 | 北京世纪好未来教育科技有限公司 | Data processing and model training method and device, electronic equipment and storage medium |
WO2022252079A1 (en) * | 2021-05-31 | 2022-12-08 | 京东方科技集团股份有限公司 | Data processing method and apparatus |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109887541A (en) * | 2019-02-15 | 2019-06-14 | 张海平 | A kind of target point protein matter prediction technique and system in conjunction with small molecule |
CN111275491A (en) * | 2020-01-21 | 2020-06-12 | 深圳前海微众银行股份有限公司 | Data processing method and device |
US20200210899A1 (en) * | 2017-11-22 | 2020-07-02 | Alibaba Group Holding Limited | Machine learning model training method and device, and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103347009B (en) | A kind of information filtering method and device | |
CN112052915A (en) | Data training method, device, equipment and storage medium | |
EP3719708A1 (en) | Model test method and device | |
CN110490203B (en) | Image segmentation method and device, electronic equipment and computer readable storage medium | |
CN110084155B (en) | Method, device and equipment for counting dense people and storage medium | |
CN115222630A (en) | Image generation method, and training method and device of image denoising model | |
CN110362563A (en) | The processing method and processing device of tables of data, storage medium, electronic device | |
CN110210278A (en) | A kind of video object detection method, device and storage medium | |
CN109241739B (en) | API-based android malicious program detection method and device and storage medium | |
CN111026087B (en) | Weight-containing nonlinear industrial system fault detection method and device based on data | |
CN111611532B (en) | Character relation completion method and device and electronic equipment | |
CN111861931A (en) | Model training method, image enhancement method, model training device, image enhancement device, electronic equipment and storage medium | |
CN111769984A (en) | Method for adding nodes in block chain network and block chain system | |
CN111539206A (en) | Method, device and equipment for determining sensitive information and storage medium | |
CN113114489B (en) | Network security situation assessment method, device, equipment and storage medium | |
CN112913253A (en) | Image processing method, apparatus, device, storage medium, and program product | |
CN114759904A (en) | Data processing method, device, equipment, readable storage medium and program product | |
CN107948739B (en) | Method and device for calculating number of users for internet protocol television reuse | |
CN111046337A (en) | Data interval value processing method and device, equipment and storage medium | |
CN111444756A (en) | Dangerous driving scene identification platform, method and storage medium | |
CN111476743A (en) | Digital signal filtering and image processing method based on fractional order differentiation | |
CN117828900B (en) | Impurity removal reminding method, system and medium applied to slab rolling | |
CN114611107A (en) | Android malicious software classification method based on super-resolution characteristic image | |
CN111353944A (en) | Image reconstruction method and device and computer readable storage medium | |
CN116228568A (en) | Image defect data augmentation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||