CN110020683A - Gear sample data base construction method - Google Patents
Gear sample database construction method
- Publication number
- CN110020683A (application CN201910263160.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- classification
- gear
- generator
- arbiter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The present invention provides a gear sample database construction method. First, actual gear parameters are acquired and labels are determined. A generator and a discriminator are then constructed from neural networks, with the initial values of each layer's weight coefficients obeying a uniform distribution. A noise sequence signal and a set label are input into the generator to produce generated data; the generated data and the original sampled data are then input into the discriminator simultaneously for classification judgment. By updating the weights of the neurons in the generator and the discriminator, a dynamic equilibrium is reached. The generated data are then sorted in ascending order of their distance to each class centre, and for each class the deficit is drawn in order from that class's generated-data queue and merged into the raw sample data to form the final training database for the classifier. The effect: even when the sampled data are scarce, generating new data satisfies the quantity requirement of training samples for every class in the database, avoiding the impact of insufficient data volume on the training of the classifier.
Description
Technical field
The present invention relates to database construction technologies in the big-data field, and in particular to a gear sample database construction method.
Background technique
As a critical component of the automotive transmission, the gear strongly influences the stability and safety of the vehicle. The gear system is complex and nonlinear, and cannot directly reflect the relationship between mechanical faults and physical parameters. Much research is devoted to improving gear safety, including improving the physical structure of the gear and finding the relationships between its physical parameters and the safety factor.
Traditional techniques usually calculate the safety factor from the gear parameters and then obtain the reliability by assessing the safety factor. Both steps involve a large number of complicated equations and assumptions, which causes large errors in the reliability assessment. Exploring the intrinsic relationship between gear reliability and gear parameters with data-driven techniques is therefore an effective way to solve the reliability-assessment problem.
However, when handling real-world industrial gear data, data-driven techniques face an imbalanced-learning problem. Industrially, gear reliability is generally divided into four classes: ultra-high, high, standard, and low reliability. Higher reliability means higher safety of the transmission gear, but also more noise caused by mechanical resonance. A trade-off between reliability and noise must therefore be made when designing transmission gears, which concentrates most of the gear data in the standard-reliability class. As a result the collected gear data are imbalanced: the ultra-high- and low-reliability classes have fewer instances than the other two.
In recent years, the classification of imbalanced data has drawn wide attention. This is because classifiers tend to favour the majority classes and misclassify minority instances. More instances therefore need to be generated for the minority classes to help the classifier better learn the data properties.
Summary of the invention
To solve the above problems, the present invention proposes a gear sample database construction method that provides more training sample data for gear stability analysis while avoiding the data-imbalance problem.
To achieve this goal, the technical solution adopted by the present invention is as follows:
A gear sample database construction method, characterized by the following steps:
S1: Acquire a predetermined number of actual gear parameters, determine a label for each group of gear parameters according to its safety factor, and count the number of label classes.
S2: Construct a generator and a discriminator from neural networks, with the initial values of each layer's weight coefficients in both networks obeying a uniform distribution.
S3: Input a noise sequence signal and a set label into the generator to produce generated data.
S4: Input the generated data output by the generator and the original sampled data from step S1 into the discriminator simultaneously for classification judgment.
S5: Compare the discriminator's classification result with the set value to obtain the classification error. If the classification error is greater than the preset target, update the weights of the neurons in the generator and return to S3 to generate new data; if it is less than the preset target, update the weights of the neurons in the discriminator and return to S4 to re-run the classification judgment, until a dynamic equilibrium is reached.
S6: Determine the centre of each class in the original sampled data, compute the distance from each group of generated data to each class centre, and sort in ascending order of distance to obtain a generated-data queue for each class.
S7: According to the required training-sample size for each class and the number of original samples in that class, take the deficit in order from the class's generated-data queue and merge it into the raw sample data, forming the final training database for the classifier.
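Steps S6 and S7 amount to a distance-ranked top-up of each class. A minimal sketch in Python (the function and array names are illustrative, not from the patent):

```python
import numpy as np

def top_up_class(real_X, gen_queue, needed_total):
    """S7 sketch: gen_queue holds this class's generated data, already
    sorted in ascending order of distance to the class centre (step S6).
    Take just enough of the closest generated rows to reach needed_total."""
    deficit = max(needed_total - len(real_X), 0)
    if deficit == 0:
        return real_X
    return np.concatenate([real_X, gen_queue[:deficit]])

real = np.zeros((3, 2))    # 3 real samples of a minority class (toy)
gen = np.ones((10, 2))     # distance-sorted generated queue (toy)
merged = top_up_class(real, gen, 8)
print(merged.shape)        # (8, 2)
```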
Optionally, when the number of hidden layers of the generator and the discriminator is greater than 1, the initial value of each layer's weight coefficient in step S2 obeys the uniform distribution U(−√(6/(h_i+h_{i+1})), √(6/(h_i+h_{i+1}))), where W_ij denotes the weight coefficient of the j-th node in layer i, h_i the number of nodes in layer i, and h_{i+1} the number of nodes in layer i+1.
Optionally, when the number of hidden layers of the generator and the discriminator is equal to 1, the initial value of each layer's weight coefficient in step S2 obeys the uniform distribution U(−√(3/h_i), √(3/h_i)), where W_ij denotes the weight coefficient of the j-th node in layer i and h_i the number of nodes in layer i.
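The two initialization clauses can be sketched as follows. The exact bounds in the patent are given by formulas that do not survive in this text, so the √(6/(h_i+h_{i+1})) and √(3/h_i) bounds below are an assumed reading:

```python
import numpy as np

def init_weights(h_i, h_next, deep=True, seed=0):
    """Uniform initialization of one weight matrix W of shape (h_i, h_next).
    deep=True  -> bound sqrt(6/(h_i+h_next))  (hidden layers > 1)
    deep=False -> bound sqrt(3/h_i)           (single hidden layer)
    Both bounds are assumed readings of the patent's image formulas."""
    bound = np.sqrt(6.0 / (h_i + h_next)) if deep else np.sqrt(3.0 / h_i)
    rng = np.random.default_rng(seed)
    return rng.uniform(-bound, bound, size=(h_i, h_next))

W = init_weights(128, 1024)
print(W.shape)    # (128, 1024)
```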
Optionally, the gear parameters in step S1 comprise 85 gear feature values and 2 safety factors. The 85 gear feature values include the tooth load, application factor K_A, internal dynamic factor K_v, face-width load factors K_Hβ and K_Fβ, transverse load factors K_Hα and K_Fα, and mesh load factor K_γ; the 2 safety factors are the bending safety factor S_F and the contact safety factor S_H.
Optionally, the generator consists of 4 neural-network layers with 128, 1024, 256 and 87 neurons in turn; the 87 neurons of the 4th layer output the 85 parameters and 2 safety factors, and the activation function in each layer is the ReLU function.
Optionally, the discriminator consists of 3 neural-network layers with 256, 1024 and 128 neurons in turn, and the activation function in each layer is the leaky ReLU function.
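A minimal numpy sketch of the two networks' layer widths and activations (forward pass only; the weight values, the 87-dimensional discriminator input, and the helper names are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):    # the slope alpha is an assumed value
    return np.where(x > 0.0, x, alpha * x)

def forward(x, weights, act):
    # Plain fully connected forward pass, same activation at every layer.
    for W in weights:
        x = act(x @ W)
    return x

rng = np.random.default_rng(0)
gen_widths = [128, 1024, 256, 87]   # generator: 87 outputs = 85 params + 2 safety factors
dis_widths = [87, 256, 1024, 128]   # discriminator widths 256/1024/128, fed 87-dim samples
gW = [rng.normal(scale=0.02, size=(a, b)) for a, b in zip(gen_widths, gen_widths[1:])]
dW = [rng.normal(scale=0.02, size=(a, b)) for a, b in zip(dis_widths, dis_widths[1:])]

noise = rng.normal(size=(5, 128))        # batch of 5 noise vectors
fake = forward(noise, gW, relu)          # shape (5, 87)
score = forward(fake, dW, leaky_relu)    # shape (5, 128)
print(fake.shape, score.shape)
```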
Optionally, the input noise sequence signal in step S3 obeys a uniform or Gaussian distribution.
Optionally, in step S6 the mean μ_l and covariance Cov_l of the original sampled data X^l in the l-th class are computed as μ_l = (1/d_l) Σ_{j=1}^{d_l} X_j^l and Cov_l = (1/d_l) Σ_{j=1}^{d_l} (X_j^l − μ_l)(X_j^l − μ_l)^T, where X_j^l denotes the j-th group of data in the l-th class and d_l the number of data groups in the l-th class; the distance from the i-th group of generated data G_i to the l-th class is then computed as D_{i,l} = sqrt((G_i − μ_l)^T Cov_l^{-1} (G_i − μ_l)).
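Under this reading (class mean, covariance, and a Mahalanobis-type distance; the distance formula itself is an assumption, since it is an image in the source), step S6 can be sketched as:

```python
import numpy as np

def class_stats(X):
    """Mean and covariance of one class's original samples (rows)."""
    mu = X.mean(axis=0)
    centered = X - mu
    cov = centered.T @ centered / len(X)
    return mu, cov

def mahalanobis(g, mu, cov):
    # Assumed distance: sqrt((g - mu)^T Cov^-1 (g - mu)).
    d = g - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))      # toy original data of one class
mu, cov = class_stats(X)
gen = rng.normal(size=(6, 3))      # toy generated data
dists = sorted(mahalanobis(g, mu, cov) for g in gen)   # ascending queue (S6)
print(dists[0] <= dists[-1])       # True
```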
Optionally, a boundary-constraint module is further provided at the output of the generator, and the data produced by the generator pass through the boundary constraint before being output to the discriminator.
The beneficial effects of the present invention are: with the above method, even when the sampled data are scarce, generating new data can satisfy the quantity requirement of training samples for every class in the database. The generated data have a distribution similar to that of the original sampled data, and the sample counts of all classes can be kept consistent. This solves the difficulty of sampling gear data, avoids the impact of insufficient data volume on the training of the classifier, and provides a reliable guarantee for gear safety prediction.
Detailed description of the invention
Fig. 1 is a system architecture diagram of the invention;
Fig. 2 is an architecture diagram of the gear safety-factor forecasting system;
Fig. 3 shows the influence of the original data and the generated data on the distribution of the bending safety factor;
Fig. 4 shows the influence of the original data and the generated data on the distribution of the contact safety factor.
Specific embodiment
The embodiments of the technical solution of the present invention are described in detail below. The following embodiments are only used to illustrate the technical solution more clearly and are therefore only examples; they are not intended to limit the protection scope of the present invention.
It should be noted that, unless otherwise indicated, technical or scientific terms used in this application have the ordinary meaning understood by one of ordinary skill in the art to which the invention belongs.
As shown in Fig. 1, this embodiment discloses a gear sample database construction method comprising the following steps:
S1: Acquire a predetermined number of actual gear parameters, determine a label for each group of gear parameters according to its safety factor, and count the number of label classes.
In this embodiment, we obtain gear data from the test equipment of a gearbox company to examine the method's performance. The gearbox test platform simulates a vehicle operating in the real world and acquires gear parameter values. The platform consists of four main components: a power engine, bearings, the transmission, and bus communication. The power engine supplies energy for transmission operation; the bearings adjust the gear rotational speed; the gears under test are mounted in the test gearbox; and the transmission state is sent to a computer via bus communication. Transmission-gear parameters that pass the test are taken as the original data. Here a group of gear parameters comprises 85 gear feature values and 2 safety factors: the 85 features include the tooth load, application factor K_A, internal dynamic factor K_v, face-width load factors K_Hβ and K_Fβ, transverse load factors K_Hα and K_Fα, and mesh load factor K_γ; the 2 safety factors are the bending safety factor S_F and the contact safety factor S_H. The relationship between gear reliability and safety factor is shown in Table 1; there is no limit on the maximum value. In the gear-reliability ranking, the safety factor with the lower grade dominates. For example, with S_H = 0.9 and S_F = 1.8, although the value of S_F belongs to class 2, the gear reliability belongs to class 4 according to S_H.
Table 1: Minimum safety factors of the gear reliability grades
S2: Construct a generator and a discriminator from neural networks, with the initial values of each layer's weight coefficients in both networks obeying a uniform distribution.
In this embodiment, the generator consists of 4 neural-network layers with 128, 1024, 256 and 87 neurons in turn; the 87 neurons of the 4th layer output the 85 parameters and 2 safety factors, and the activation function in each layer is the ReLU function. The discriminator consists of 3 neural-network layers with 256, 1024 and 128 neurons in turn, and the activation function in each layer is the leaky ReLU function. The initial value of each layer's weight coefficient obeys the uniform distribution U(−√(6/(h_i+h_{i+1})), √(6/(h_i+h_{i+1}))), where W_ij denotes the weight coefficient of the j-th node in layer i, h_i the number of nodes in layer i, and h_{i+1} the number of nodes in layer i+1.
S3: Input a noise sequence signal and a set label into the generator to produce generated data; here the input noise sequence signal obeys a uniform or Gaussian distribution.
S4: Input the generated data output by the generator and the original sampled data from step S1 into the discriminator simultaneously for classification judgment.
S5: Compare the discriminator's classification result with the set value to obtain the classification error. If the classification error is greater than the preset target, update the weights of the neurons in the generator and return to S3 to generate new data; if it is less than the preset target, update the weights of the neurons in the discriminator and return to S4 to re-run the classification judgment, until a dynamic equilibrium is reached.
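The switching rule in S5 (update the generator while the discriminator's error exceeds the preset target, otherwise update the discriminator) can be expressed as a small helper; the error values here are made up for illustration:

```python
def who_updates(classification_error, target):
    """S5 rule: error above the preset target -> update the generator
    and regenerate (back to S3); error below it -> update the
    discriminator and re-judge (back to S4)."""
    return "generator" if classification_error > target else "discriminator"

trace = [who_updates(e, target=0.5) for e in (0.9, 0.7, 0.4, 0.6, 0.3)]
print(trace)
```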
S6: Determine the centre of each class in the original sampled data, compute the distance from each group of generated data to each class centre, and sort in ascending order of distance to obtain a generated-data queue for each class. In this embodiment, the mean μ_l and covariance Cov_l of the original sampled data X^l in the l-th class are computed as μ_l = (1/d_l) Σ_{j=1}^{d_l} X_j^l and Cov_l = (1/d_l) Σ_{j=1}^{d_l} (X_j^l − μ_l)(X_j^l − μ_l)^T, where X_j^l denotes the j-th group of data in the l-th class and d_l the number of data groups in the l-th class; the distance from the i-th group of generated data G_i to the l-th class is then computed as D_{i,l} = sqrt((G_i − μ_l)^T Cov_l^{-1} (G_i − μ_l)).
S7: According to the required training-sample size for each class and the number of original samples in that class, take the deficit in order from the class's generated-data queue and merge it into the raw sample data, forming the final training database for the classifier.
As can also be seen from Fig. 1, in a specific implementation a boundary-constraint module is additionally provided at the output of the generator; the data produced by the generator pass through the boundary constraint before being output to the discriminator.
The boundary-constraint module can use a constraint model, described as:
Y_i^h = A^h ∘ X_i^h + δ^h
Y_i^l = A^l ∘ X_i^l + δ^l
The parameter types in the generated data are divided into two parts, X^h and X^l, corresponding to Y^h and Y^l in the new data. X_i^h denotes the parameter vector of the i-th type in X^h, X_i^l the parameter vector of the i-th type in X^l, Y_i^h the parameter vector of the i-th type in Y^h, and Y_i^l the parameter vector of the i-th type in Y^l. A^h is the coefficient matrix corresponding to X^h and δ^h the corresponding bias vector; A^l is the coefficient matrix corresponding to X^l and δ^l the corresponding bias vector. The operator ∘ denotes elementwise multiplication of vectors.
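A minimal sketch of the boundary-constraint mapping Y = A ∘ X + δ (elementwise product plus bias; the coefficient values below are illustrative, not from the patent):

```python
import numpy as np

def bound_constrain(X, A, delta):
    """Apply the boundary constraint Y = A * X + delta, where * is the
    elementwise (Hadamard) product and delta broadcasts over rows."""
    return A * X + delta

X = np.array([[2.0, -1.0], [0.5, 4.0]])    # generated parameter vectors (toy)
A = np.array([[0.5, 0.5], [0.5, 0.5]])     # illustrative scaling coefficients
delta = np.array([1.0, 0.0])               # illustrative bias vector
Y = bound_constrain(X, A, delta)
print(Y)    # [[2.  -0.5], [1.25 2.]]
```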
Regarding the importance of the initial weight values, the following experiments compare the alternatives. Fig. 2 shows the data generated with randomly distributed initial weights; Fig. 3 shows the data generated with initial weights drawn from a uniform distribution whose bounds depend on m, the number of input-layer nodes; Fig. 4 shows the data generated with the initial weights designed by this method. Fig. 2 shows that random initial weights easily produce high chaos, and the discriminator and generator have convergence problems. Figs. 3 and 4 show that with initially set neural weights a relatively steady-state convergence is achieved; moreover, the initial-weight design proposed by the present invention converges faster, around the 53rd iteration, with better stability.
To further compare the performance of the database constructed by this method, we use 4 common oversampling methods from the imbalanced-learn API 0.3.0 toolbox for comparison: random over-sampling (ROS), the synthetic minority over-sampling technique (SMOTE), the adaptive synthetic sampling method (ADASYN), and SMOTE with edited nearest neighbours (SMOTEENN). The method proposed by the present invention is denoted CGAN-MBL, and the variant whose generator receives no label is denoted GAN-MBL. Classifier performance is assessed with the precision (P) and recall (R) indices, together with the composite F-Measure (F-M) and G-Mean (G-M) indices.
Where:
P = TP/(TP+FP), R = TP/(TP+FN), F-M = 2·P·R/(P+R), G-M = sqrt(R · TN/(TN+FP)),
TP denotes the number of correctly classified positive instances; FP the number of misclassified negative instances; TN the number of correctly classified negative instances; FN the number of misclassified positive instances. The higher the P, R, F-M and G-M values, the better the classification.
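The four indices follow the usual definitions (reconstructed here, since the formulas are images in the source; the counts in the example are arbitrary):

```python
import math

def indices(tp, fp, tn, fn):
    """Standard definitions consistent with the text: precision, recall,
    F-Measure, and G-Mean (geometric mean of recall and specificity)."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f_m = 2.0 * p * r / (p + r)
    g_m = math.sqrt(r * tn / (tn + fp))
    return p, r, f_m, g_m

p, r, f_m, g_m = indices(tp=8, fp=2, tn=7, fn=3)
print(round(p, 3), round(r, 3), round(f_m, 3), round(g_m, 3))
```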
Table 2 gives the average results over different classifiers.
Table 2 compares the algorithms. It can be seen that SMOTEENN, as a variant of SMOTE, has very low processing efficiency on the gear data. The reason is that the majority and minority classes of the gear data intersect, and SMOTEENN deletes edge examples that are useful for inheriting the properties of the test data. Compared with GAN-MBL, the label information helps learn the marginal distribution of each class, giving higher processing efficiency on the gear data at the classifier.
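The simplest of the four baselines above, random over-sampling, can be sketched in a few lines of numpy (the real comparison used imbalanced-learn 0.3.0, not this toy helper):

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Naive ROS: duplicate randomly chosen rows of each minority class
    until every class matches the largest class's count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    keep = []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=target - n, replace=True)
        keep.append(np.concatenate([idx, extra]))
    keep = np.concatenate(keep)
    return X[keep], y[keep]

X = np.arange(12.0).reshape(6, 2)
y = np.array([0, 0, 0, 0, 1, 1])    # imbalanced toy labels
Xr, yr = random_oversample(X, y)
print(np.bincount(yr))              # [4 4]
```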
Since only mean accuracies are shown here, we note that different techniques have different standard deviations. To further illustrate the statistical improvement, we compare the means under a Gaussian assumption using Welch's t-test to assess significance. In addition, we apply the Mann-Whitney U test, which compares two samples without any distributional assumption, to observe significance. Tables 3 and 4 give the test results for the indices at the 0.05 significance level.
Table 3: Welch's t-test results at the 0.05 significance level
Table 3 shows that, regardless of the classifier, the test results between the proposed method and the other 3 methods (ADASYN, SMOTEENN, GAN-MBL) are similar in both Welch's t-test and the Mann-Whitney U test, indicating that these methods have similar performance on the gear data.
Table 4: Mann-Whitney U test results at the 0.05 significance level
In addition, the method proposed herein is compared with the other 5 methods using, from the scikit-learn toolbox, support vector machines with 2 different kernels (an RBF kernel and a linear kernel, denoted SVM1 and SVM2) and 2 SVM variants (linear SVM and ν-SVM) to classify the generated data. The 4 SVM classifiers are set up as follows:
1) SVM1: kernel = RBF; γ = 10, the inverse of the radius of influence of the examples chosen as support vectors; C = 1.0, trading off training misclassification against the simplicity of the decision surface.
2) SVM2: kernel = linear; γ = 10, C = 1.0.
3) Linear SVM: the penalty is the L2 regularization function; the loss function is the squared hinge.
4) ν-SVM: ν = 0.2, an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors, which determines how many training samples are allowed to be misclassified.
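These four configurations map onto scikit-learn roughly as follows (a sketch; the toy data and labels are illustrative, and library versions may differ from the 0.3.0-era setup described above):

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC, NuSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # toy, roughly balanced labels

svm1 = SVC(kernel="rbf", gamma=10, C=1.0)             # 1) RBF kernel
svm2 = SVC(kernel="linear", C=1.0)                    # 2) linear kernel (gamma unused)
lin = LinearSVC(penalty="l2", loss="squared_hinge")   # 3) L2 penalty, squared hinge
nu = NuSVC(nu=0.2)                                    # 4) nu-SVM with nu = 0.2

for clf in (svm1, svm2, lin, nu):
    clf.fit(X, y)
preds = svm2.predict(X)
print(preds.shape)    # (60,)
```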
As can be seen from Table 5, with all 4 SVM classifiers the performance of this method is better than that of the other 5 methods.
Table 5: Index averages of the different methods with the SVM classifiers
For SVM1, the RBF kernel fits a Gaussian curve around the data points. The gear parameters do not fully follow a Gaussian distribution, making the classification results of SVM1 worse than those of the other 3 SVM classifiers. Moreover, ROS, SMOTE, ADASYN and SMOTEENN generate data between rows of the original data without considering any data distribution, whereas GAN-MBL and CGAN-MBL, which take the distribution into account, achieve better classification results than the other 4 methods. SVM2 and the linear SVM perform well on the gear data. Both realize a linear mapping in the support vector machine, but there are differences: the linear SVM is more flexible in its penalty and loss functions and can better adapt to large sample sets, yet owing to the scarcity of the gear data the linear-kernel support vector machine outperforms SVM2. In addition, SVM2 converts the multi-class problem into binary problems such as class 1 vs. not class 1 and class 2 vs. not class 2, while the linear SVM handles the multi-class problem by computing the classification performance between every pair of classes; compared with the linear SVM, SVM2 therefore has better classification performance relative to its computational complexity. The ν-SVM introduces an additional parameter into the support vector machine to control the number of support vectors; its performance on the gear data is also good, but the gear data support a lower ν.
To further verify its validity, a random forest (RF) classifier is used to compare the per-class F-M and G-M results after merging the data generated by the different techniques. After the original data set is extended with the data generated by this method, the classification ability on the minority classes (such as class 1 and class 4) is improved. Compared with the other 5 methods, CGAN-MBL obtains the highest classification indices averaged over all classes, with higher classification accuracy.
In conclusion the present invention asks this subject for learning data imbalance present in reliability of gears assessment, propose
A kind of gear sample data base construction method, different from the traditional technology for using large amount of complex to calculate, we pass through study gear
The inherence of data solves the problems, such as transmission gear reliability assessment.The experimental results showed that this method is assessed in reliability of gears
Aspect is better than traditional method.A large amount of test demonstrates the validity using this method.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications or replacements do not remove the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the scope of the claims and the description of the invention.
Claims (9)
1. A gear sample database construction method, characterized by comprising the following steps:
S1: acquiring a predetermined number of actual gear parameters, determining a label for each group of gear parameters according to its safety factor, and counting the number of label classes;
S2: constructing a generator and a discriminator from neural networks, the initial values of each layer's weight coefficients in the generator and the discriminator obeying a uniform distribution;
S3: inputting a noise sequence signal and a set label into the generator to produce generated data;
S4: inputting the generated data output by the generator and the original sampled data from step S1 into the discriminator simultaneously for classification judgment;
S5: comparing the discriminator's classification result with a set value to obtain a classification error; when the classification error is greater than a preset target, updating the weights of the neurons in the generator and returning to S3 to generate new data; when the classification error is less than the preset target, updating the weights of the neurons in the discriminator and returning to S4 to re-run the classification judgment, until a dynamic equilibrium is reached;
S6: determining the centre of each class in the original sampled data, computing the distance from each group of generated data to each class centre, and sorting in ascending order of distance to obtain a generated-data queue for each class;
S7: according to the required training-sample size for each class and the number of original samples of that class, taking the deficit in order from the class's generated-data queue and merging it into the raw sample data as the final training database for the classifier.
2. The gear sample database construction method according to claim 1, characterized in that: when the number of hidden layers of the generator and the discriminator is greater than 1, the initial value of each layer's weight coefficient in step S2 obeys the uniform distribution U(−√(6/(h_i+h_{i+1})), √(6/(h_i+h_{i+1}))), where W_ij denotes the weight coefficient of the j-th node in layer i, h_i the number of nodes in layer i, and h_{i+1} the number of nodes in layer i+1.
3. The gear sample database construction method according to claim 1, characterized in that: when the number of hidden layers of the generator and the discriminator is equal to 1, the initial value of each layer's weight coefficient in step S2 obeys the uniform distribution U(−√(3/h_i), √(3/h_i)), where W_ij denotes the weight coefficient of the j-th node in layer i and h_i the number of nodes in layer i.
4. The gear sample database construction method according to any one of claims 1 to 3, characterized in that: the gear parameters in step S1 comprise 85 gear feature values and 2 safety factors; the 85 gear feature values include the tooth load, application factor K_A, internal dynamic factor K_v, face-width load factors K_Hβ and K_Fβ, transverse load factors K_Hα and K_Fα, and mesh load factor K_γ; the 2 safety factors are the bending safety factor S_F and the contact safety factor S_H.
5. The gear sample database construction method according to claim 4, characterized in that: the generator consists of 4 neural-network layers, the numbers of neurons being 128, 1024, 256 and 87 in turn; the 87 neurons of the 4th layer output the 85 parameters and 2 safety factors, and the activation function in each layer is the ReLU function.
6. The gear sample database construction method according to claim 4, characterized in that: the discriminator consists of 3 neural-network layers, the numbers of neurons being 256, 1024 and 128 in turn, and the activation function in each layer is the leaky ReLU function.
7. The gear sample database construction method according to claim 1, characterized in that: the input noise sequence signal in step S3 obeys a uniform or Gaussian distribution.
8. The gear sample database construction method according to claim 1, characterized in that: in step S6 the mean μ_l and covariance Cov_l of the original sampled data X^l in the l-th class are computed as μ_l = (1/d_l) Σ_{j=1}^{d_l} X_j^l and Cov_l = (1/d_l) Σ_{j=1}^{d_l} (X_j^l − μ_l)(X_j^l − μ_l)^T, where X_j^l denotes the j-th group of data in the l-th class and d_l the number of data groups in the l-th class; the distance from the i-th group of generated data G_i to the l-th class is then computed as D_{i,l} = sqrt((G_i − μ_l)^T Cov_l^{-1} (G_i − μ_l)).
9. The gear sample database construction method according to claim 1, characterized in that: a boundary-constraint module is further provided at the output of the generator, and the data produced by the generator pass through the boundary constraint before being output to the discriminator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910263160.8A CN110020683A (en) | 2019-04-02 | 2019-04-02 | Gear sample data base construction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110020683A true CN110020683A (en) | 2019-07-16 |
Family
ID=67190414
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910263160.8A Pending CN110020683A (en) | 2019-04-02 | 2019-04-02 | Gear sample data base construction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110020683A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108596261A (en) * | 2018-04-28 | 2018-09-28 | 重庆青山工业有限责任公司 | Based on the gear parameter oversampler method for generating confrontation network model |
CN109214103A (en) * | 2018-09-25 | 2019-01-15 | 重庆青山工业有限责任公司 | The reliability of gears analysis system of confrontation network is generated based on boundary constraint |
Non-Patent Citations (2)
Title |
---|
JIE LI, et al.: "A Novel Generative Model With Bounded-GAN for Reliability Classification of Gear Safety", IEEE Transactions on Industrial Electronics *
JIE LI, et al.: "CGAN-MBL for Reliability Assessment With Imbalanced Transmission Gear Data", IEEE Transactions on Instrumentation and Measurement *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tian | Approach for short-term traffic flow prediction based on empirical mode decomposition and combination model fusion | |
Cao et al. | The fault diagnosis of a switch machine based on deep random forest fusion | |
CN103489009B | Pattern recognition method based on adaptively corrected neural network | |
US20150134578A1 | Discriminator, discrimination program, and discrimination method | |
CN107730039A | Method and system for distribution network load forecasting | |
CN106779087A | General-purpose machine learning data analysis platform | |
CN109034054B | Harmonic multi-label classification method based on LSTM | |
CN111191709B | Continual learning framework and continual learning method for deep neural networks | |
CN106326984A | User intention identification method and device and automatic answering system | |
CN106781489A | Road network trend prediction method based on recurrent neural network | |
Srisaeng et al. | An adaptive neuro-fuzzy inference system for forecasting Australia's domestic low cost carrier passenger demand | |
CN105894125A | Power transmission and transformation project cost estimation method | |
CN107609671A | Short-term power load forecasting method based on a composite factor evaluation model | |
CN110363230A | Stacking-ensemble sewage treatment fault diagnosis method based on weighted base classifiers | |
CN110009030A | Sewage treatment fault diagnosis method based on a stacked meta-learning strategy | |
CN106022954A | Multiple BP neural network load forecasting method based on grey relational degree | |
CN105005708B | Generalized load characteristic aggregation method based on the AP clustering algorithm | |
CN108053052A | Intelligent monitoring system for oil and gas leakage rate of oil tank trucks | |
CN107273505A | Supervised cross-modal hashing retrieval method based on a nonparametric Bayesian model | |
Zhang et al. | Robustness verification of swish neural networks embedded in autonomous driving systems | |
CN106096723A | Product quality evaluation method for complex industrial processes based on a hybrid neural network algorithm | |
CN111967308A | Online road surface roughness identification method and system | |
CN107256461A | Charging facility site selection evaluation method and system | |
Ratrout et al. | Factors affecting performance of parametric and non-parametric models for daily traffic forecasting | |
Dovbysh et al. | Estimation of Informativeness of Recognition Signs at Extreme Information Machine Learning of Knowledge Control System. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190716 |