CN110263447A - Load spectrum extrapolation method based on a long short-term memory network - Google Patents

A load spectrum extrapolation method based on a long short-term memory network

Info

Publication number
CN110263447A
CN110263447A (application CN201910551153.8A)
Authority
CN
China
Prior art keywords
load spectrum
layer
long short-term memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910551153.8A
Other languages
Chinese (zh)
Inventor
曾敬
向华荣
郑国峰
秦致远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Engineering Research Institute Co Ltd
Original Assignee
China Automotive Engineering Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Engineering Research Institute Co Ltd filed Critical China Automotive Engineering Research Institute Co Ltd
Priority to CN201910551153.8A priority Critical patent/CN110263447A/en
Publication of CN110263447A publication Critical patent/CN110263447A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/10: Geometric CAD
    • G06F 30/15: Vehicle, aircraft or watercraft design
    • G06F 30/20: Design optimisation, verification or simulation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods


Abstract

The present invention relates to the field of load spectrum processing, and specifically to a load spectrum extrapolation method based on a long short-term memory (LSTM) network, comprising the following steps: S100: acquire load spectrum data; S200: preprocess the load spectrum data and divide it into a training set and a test set; S300: train a sample data model on the training set using the LSTM algorithm; S400: use the trained sample data model, together with the test set, to extrapolate the load spectrum data. The load spectrum extrapolation method based on a long short-term memory network provided by the invention enables the extrapolated load spectrum to better express the information of the original load spectrum, and addresses the problems in the prior art of strong human influence and incomplete representation of load spectrum data.

Description

A load spectrum extrapolation method based on a long short-term memory network
Technical field
The present invention relates to the field of load spectrum processing, and specifically to a load spectrum extrapolation method based on a long short-term memory network.
Background technique
In the development of a vehicle or its components, reliability must be verified by actual road testing. According to the principle of equivalent damage, given the road load input of the vehicle under known user conditions, the load input under user conditions can in theory be reproduced by mixing the various reinforced (accelerated durability) road surfaces of a proving ground in suitable proportions. With the reinforced surfaces of a proving ground, a reliability compliance test can be completed in a short time, reducing the test period and shortening the development cycle. For reasons of development cost and time, the road load input under user conditions is generally not acquired over the full target mileage; instead, load spectra are acquired by road type according to the distribution of user road proportions, and, provided the sample size is sufficient, the load spectrum is extrapolated to obtain the load spectrum over the target mileage.
Typical current extrapolation methods include parametric extrapolation, quantile extrapolation by mileage, peak-over-threshold (POT) extrapolation, and rainflow matrix extrapolation [1-3]. Parametric extrapolation fits probability distribution functions to the mean and amplitude dimensions of the load spectrum and, based on the fitted probability density functions and the target extrapolation mileage, extrapolates the corresponding cumulative frequencies. POT extrapolation assumes that the peaks above a threshold in the load spectrum time series follow a peak distribution; a probability density function is fitted to the peaks above the threshold, and the peaks are then extrapolated from that probability density function. Rainflow matrix extrapolation first converts the load spectrum into a rainflow matrix by rainflow counting, selects an extrapolation threshold from the rainflow matrix via the level-crossing density, obtains a limit from the cumulative rainflow matrix, and then performs rainflow matrix estimation and extrapolation. These methods can be selected according to the characteristics of the load spectrum or the purpose of the application, but fitting distribution functions and setting thresholds often introduce human factors, and simple distribution functions cannot express highly complex load spectrum information: the extrapolated load spectrum loses considerable information relative to the original, and the extrapolation result must then be converted into a time-domain program spectrum as input for the next step.
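As a hedged illustration of the POT idea described above (an invented sketch, not code from the patent), the following extracts the local peaks of a load time series that exceed a chosen threshold; a probability distribution would then be fitted to these exceedances:

```python
def peaks_over_threshold(series, threshold):
    """Return the local maxima of `series` that exceed `threshold`.

    A point counts as a local maximum if it is strictly greater than its
    left neighbour and not less than its right neighbour.
    """
    peaks = []
    for i in range(1, len(series) - 1):
        if series[i] > series[i - 1] and series[i] >= series[i + 1]:
            if series[i] > threshold:
                peaks.append(series[i])
    return peaks

# Toy load history with local peaks at 3.0, 5.0 and 2.5;
# the threshold 2.8 keeps only the two largest peaks.
signal = [0.0, 3.0, 1.0, 5.0, 2.0, 2.5, 0.5]
print(peaks_over_threshold(signal, 2.8))  # [3.0, 5.0]
```

In a full POT workflow the returned exceedances would be fitted with, for example, a generalized Pareto distribution before extrapolating.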
Summary of the invention
The invention aims to provide a load spectrum extrapolation method based on a long short-term memory network, which enables the extrapolated load spectrum to better express the information of the original load spectrum and addresses the problems in the prior art of strong human influence and incomplete representation of load spectrum data.
To solve the above technical problem, the application provides the following technical solution:
A load spectrum extrapolation method based on a long short-term memory network, comprising the following steps:
S100: acquire load spectrum data;
S200: preprocess the load spectrum data and divide it into a training set and a test set;
S300: train a sample data model on the training set using the LSTM algorithm;
S400: use the trained sample data model, together with the test set, to extrapolate the load spectrum data.
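Steps S100 to S400 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the window length and split fraction are assumed values:

```python
def make_windows(series, window):
    """S200 (part): turn a load time series into (input window, next value) samples."""
    samples = []
    for i in range(len(series) - window):
        samples.append((series[i:i + window], series[i + window]))
    return samples

def train_test_split(samples, train_fraction=0.8):
    """S200 (part): split the windowed samples into a training set and a test set."""
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]

series = [float(i % 5) for i in range(50)]   # S100: stand-in for acquired load data
samples = make_windows(series, window=4)     # each sample feeds one LSTM training step
train, test = train_test_split(samples)      # S300 trains on `train`; S400 uses `test`
print(len(train), len(test))  # 36 10
```

The model trained on `train` (S300) would then be run forward repeatedly on windows from `test` to generate the extrapolated spectrum (S400).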
In the technical solution of the invention, an LSTM (long short-term memory) network model is used: machine learning builds the sample data model and performs the extrapolation of the load spectrum data, so no parameters need to be set manually. The extrapolated load spectrum preserves the information of the original load spectrum well in both the frequency-domain plot and the rainflow plot, which addresses the problems in the prior art of strong human influence and incomplete retention of load spectrum data.
Further, each unit of the LSTM algorithm comprises an input layer, an output layer, a forgetting layer, and a state update step.
The forgetting layer selectively processes the content passed on by the previous unit; the state update step updates the memory state passed on by the previous unit; the input layer processes the incoming data; and the output layer generates the output of the current unit according to the input of the input layer and the memory state updated by the state update step.
Further, the input layer controls its output through a sigmoid function. The relational expression of the input layer is: i_t = σ(W_i·[h_{t-1}, X_t] + b_i), where i_t is the output of the input layer, σ is the sigmoid function, W_i is the weight of the input layer, h_{t-1} is the output of the previous LSTM unit, X_t is the input of the current LSTM unit, and b_i is the bias of the input layer.
The sigmoid function restricts the output to the range [0, 1], controlling how much new information is admitted.
Further, the state update step controls its output through a tanh function. The relational expression of the state update step is: C̃_t = tanh(W_c·[h_{t-1}, X_t] + b_c), where C̃_t is the output of the state update step, tanh is the hyperbolic tangent function, W_c is the weight of the state update step, and b_c is the bias of the state update step.
Before the new memory state C_t is generated, the candidate memory state C̃_t is first produced; it is influenced by the previous unit's output h_{t-1}, and its output is controlled by the nonlinear activation function tanh. The new memory state then combines the retained old state with the admitted candidate: C_t = f_t * C_{t-1} + i_t * C̃_t.
Further, the forgetting layer controls its output through a sigmoid function. The relational expression of the forgetting layer is: f_t = σ(W_f·[h_{t-1}, X_t] + b_f), where f_t is the output of the forgetting layer, W_f is the weight of the forgetting layer, and b_f is the bias of the forgetting layer.
As with the input layer, the sigmoid function restricts the output to the range [0, 1], controlling how strongly the memory state of the previous unit influences the current memory unit.
Further, the relational expressions of the output layer are:
o_t = σ(W_o·[h_{t-1}, X_t] + b_o);
h_t = o_t * tanh(C_t);
where h_t is the output of the current LSTM unit, C_{t-1} is the memory state of the previous LSTM unit, C_t is the memory state of the current LSTM unit, W_o is the weight of the output layer, and b_o is the bias of the output layer.
The output layer filters out redundant information in the unit: if the gate output reaches the threshold, the gate output is multiplied by the result computed in the current layer, and the product is passed on as the input of the next layer; if the threshold is not reached, the output result is "forgotten".
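Taken together, the relational expressions of the input layer, forgetting layer, state update step, and output layer describe one forward step of an LSTM unit. The following NumPy sketch (an illustration with randomly initialized weights; not the patent's TensorFlow implementation) applies them directly:

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM forward step using the gate equations from the description.

    W and b hold the weights/biases of the input gate (i), forget gate (f),
    candidate state (c), and output gate (o); each weight maps [h_prev, x_t].
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hx = np.concatenate([h_prev, x_t])           # [h_{t-1}, X_t]
    i_t = sigmoid(W["i"] @ hx + b["i"])          # input layer
    f_t = sigmoid(W["f"] @ hx + b["f"])          # forgetting layer
    c_tilde = np.tanh(W["c"] @ hx + b["c"])      # state update step (candidate)
    c_t = f_t * c_prev + i_t * c_tilde           # new memory state
    o_t = sigmoid(W["o"] @ hx + b["o"])          # output layer
    h_t = o_t * np.tanh(c_t)                     # unit output
    return h_t, c_t

hidden, inputs = 3, 2
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((hidden, hidden + inputs)) * 0.1 for k in "ifco"}
b = {k: np.zeros(hidden) for k in "ifco"}
h, c = lstm_step(rng.standard_normal(inputs), np.zeros(hidden), np.zeros(hidden), W, b)
print(h.shape, c.shape)  # (3,) (3,)
```

Because h_t = o_t * tanh(C_t) with o_t in [0, 1], every component of the unit output stays inside (-1, 1).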
Further, S300 specifically comprises:
S310: build the sample data model structure;
S320: compare the data output after training with the data of the test set, and compute the cost function of the two data sets;
S330: adjust the weights and biases of each layer according to the result of S320, until the cost function converges.
Further, in S320 the squared error loss is used as the cost function.
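As an illustration of the squared error cost named above (a minimal sketch, not the patent's code; the sample values are invented):

```python
def squared_error_loss(predicted, observed):
    """Mean squared error between model predictions and test-set values."""
    assert len(predicted) == len(observed)
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

# Errors of 0.0, 0.5 and 1.0 give a mean squared error of 1.25/3.
print(squared_error_loss([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))  # ≈ 0.4167
```

In S330 the weights and biases of each layer would be adjusted to reduce this quantity until it converges.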
Further, the data processing in S200 comprises one or more of normalization, standardization, and regularization.
Further, the LSTM algorithm defines the number of neurons as 100, the time step as 20 ms, the batch size as 60, and the learning rate as 0.0001.
Brief description of the drawings
Fig. 1 is the flow chart of an embodiment of the load spectrum extrapolation method based on a long short-term memory network according to the invention;
Fig. 2 is the structural schematic diagram of one LSTM unit in the embodiment;
Fig. 3 shows the variation of the biases during training in the embodiment;
Fig. 4 shows the variation of the weights during training in the embodiment;
Fig. 5 is the original load spectrum of the pebble road in the embodiment;
Fig. 6 is the original load spectrum of the Belgian road in the embodiment;
Fig. 7 is the extrapolated load spectrum of the pebble road in the embodiment;
Fig. 8 is the extrapolated load spectrum of the Belgian road in the embodiment;
Fig. 9 is the extrapolation comparison plot for the pebble road load spectrum in the embodiment;
Fig. 10 is the extrapolation comparison plot for the Belgian road load spectrum in the embodiment;
Fig. 11 is the rainflow plot of the original load spectrum in the embodiment;
Fig. 12 is the rainflow plot of the load spectrum extrapolated by the LSTM method in the embodiment;
Fig. 13 is the rainflow plot of the extrapolation with the Epanechnikov kernel function in the embodiment;
Fig. 14 is the rainflow plot of the extrapolation with the circle kernel function in the embodiment;
Fig. 15 is the rainflow plot of the extrapolation with the mean-value kernel function in the embodiment;
Fig. 16 is the rainflow plot of the extrapolation with the amplitude kernel function in the embodiment.
Specific embodiments
The invention is further described below through specific embodiments:
Embodiment one
As shown in Fig. 1, the load spectrum extrapolation method based on a long short-term memory network of this embodiment comprises:
S100: acquire load spectrum data;
S200: preprocess the load spectrum data and divide it into a training set and a test set. The data processing in S200 comprises one or more of normalization, standardization, and regularization; standardization is used in this embodiment.
S300: train a sample data model on the training set using the LSTM algorithm. S300 specifically comprises:
S310: build the sample data model structure;
S320: compare the data output after training with the data of the test set, and compute the cost function of the two data sets;
S330: adjust the weights and biases of each layer according to the result of S320, until the cost function converges.
In S320, the squared error loss is used as the cost function.
S400: use the trained sample data model, together with the test set, to extrapolate the load spectrum data.
In this embodiment, the LSTM algorithm defines the number of neural units as 100, the time step as 20 ms, the batch size as 60, and the learning rate as 0.0001. As shown in Fig. 2, each neural unit of the LSTM algorithm comprises an input layer, an output layer, a forgetting layer, and a state update step. The input layer controls its output to the range [0, 1] through a sigmoid function, thereby controlling how much new information is admitted. The relational expression of the input layer is: i_t = σ(W_i·[h_{t-1}, X_t] + b_i), where i_t is the output of the input layer, σ is the sigmoid function, W_i is the weight of the input layer, h_{t-1} is the output of the previous LSTM unit, X_t is the input of the current LSTM unit, and b_i is the bias of the input layer.
The state update step controls its output through a tanh function; its relational expression is: C̃_t = tanh(W_c·[h_{t-1}, X_t] + b_c), where C̃_t is the output of the state update step, tanh is the hyperbolic tangent function, W_c is the weight of the state update step, and b_c is the bias of the state update step. The new memory state is then C_t = f_t * C_{t-1} + i_t * C̃_t.
The forgetting layer controls its output through a sigmoid function; its relational expression is: f_t = σ(W_f·[h_{t-1}, X_t] + b_f), where f_t is the output of the forgetting layer, W_f is the weight of the forgetting layer, and b_f is the bias of the forgetting layer. As with the input layer, the forgetting layer restricts its output to the range [0, 1] through the sigmoid function, controlling how strongly the memory state of the previous unit influences the current memory unit.
The output layer filters out redundant information in the unit: if the gate output reaches the threshold, the gate output is multiplied by the result computed in the current layer, and the product is passed on as the input of the next layer; otherwise the output result is "forgotten". Its relational expressions are:
o_t = σ(W_o·[h_{t-1}, X_t] + b_o);
h_t = o_t * tanh(C_t);
where h_t is the output of the current LSTM unit, C_{t-1} is the memory state of the previous LSTM unit, C_t is the memory state of the current LSTM unit, W_o is the weight of the output layer, and b_o is the bias of the output layer.
Specifically, in this embodiment, taking the extrapolation of load spectra of reinforced road surfaces as an example, acceleration sensors are installed at the wheel centers and dampers of the sample vehicle, and strain gauges are installed at the springs and at some connecting-rod positions. Data are acquired on several main characteristic reinforced roads of a proving ground; the load spectra of the reinforced surfaces are acquired multiple times, and the effective load spectra are then selected and preprocessed by removing outliers, eliminating trend terms, filtering, and the like. The road information of the various tests is shown in Table 1 below:
Table 1: main reinforced road conditions in the proving ground
In this embodiment, the LSTM extrapolation model is built with TensorFlow, the open-source artificial intelligence toolkit released by Google. TensorFlow represents a computing task as a graph: each operation node in the graph takes zero or more tensors, performs a computation, and produces zero or more tensors, where each tensor is a typed multidimensional array. TensorFlow executes the graph within the context of a session, represents data as tensors, and maintains state through variables.
In this embodiment, the tensorflow, pandas, and numpy modules are first imported in the Python environment. The time-domain spectrum data saved in csv format is read into the program through the pandas.read_csv function and standardized with (data - numpy.mean(data)) / numpy.std(data). In TensorFlow, the LSTM model is encapsulated in the LSTMCell module; in this embodiment the number of neurons is defined as 100, the time step as 20 ms, the batch size as 60, and the learning rate as 0.0001. After initializing the input and output interfaces, the module is called directly and trained following the procedure of Fig. 1. CUDA provided by NVIDIA serves as hardware support, and training runs about 20 times faster than on a common CPU. Throughout the machine-learning process, the variation of the parameters can be observed with TensorBoard, as shown in Fig. 3 and Fig. 4; when the learning accuracy reaches 98%, the gain from continuing to train becomes small and training can be stopped.
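The standardization step described above, (data - numpy.mean(data)) / numpy.std(data), can be illustrated without the library dependencies (a minimal sketch; the sample values are invented):

```python
import math

def standardize(data):
    """Zero-mean, unit-variance scaling, like (data - numpy.mean(data)) / numpy.std(data)."""
    n = len(data)
    mean = sum(data) / n
    # Population standard deviation, matching numpy.std's default behaviour.
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    return [(x - mean) / std for x in data]

scaled = standardize([2.0, 4.0, 6.0, 8.0])
print([round(v, 3) for v in scaled])  # [-1.342, -0.447, 0.447, 1.342]
```

Scaling the load data this way keeps the network inputs in a numerically well-behaved range before training.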
Extrapolation can then be performed with the model obtained by training; the 10x extrapolation results of the vertical acceleration for some roads are shown in Fig. 5 to Fig. 8.
The spectrum curves of the data extrapolated by the LSTM method and of the original data are compared; the spectrum curves for some roads are shown in Fig. 9 and Fig. 10.
On the spectrum plots, the spectra before and after extrapolation agree well in overall form; the consistency is tested with the Pearson correlation coefficient method. The Pearson correlation coefficient ρ_{X,Y} measures the degree of linear correlation between arrays X and Y; the coefficient always lies between -1.0 and 1.0, variables with a coefficient close to 0 are called uncorrelated, and those close to 1 or -1 are said to be strongly correlated. The results of the Pearson correlation test on the spectrum curves before and after extrapolation are shown in Table 2. The spectrum curves before and after LSTM extrapolation are strongly correlated, showing that the method learns the spectral characteristics of the original data very well.
Table 2: correlation of the spectrum curves before and after extrapolation
Road surface | Pearson correlation coefficient
Pebble road | 0.99
Belgian road | 0.95
Variable-wavelength long-wave road | 0.97
Washboard road | 0.96
Resonance road | 0.97
Crushed stone road | 0.99
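The Pearson correlation coefficient used for Table 2 can be computed as follows (an illustrative pure-Python sketch; the sample arrays are invented, not road data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear pair is fully correlated; deviations lower the coefficient.
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 3))  # 1.0
print(round(pearson([1, 2, 3, 4], [2, 5, 5, 9]), 3))
```

Applied to the spectrum curves before and after extrapolation, values near 1 indicate that the extrapolated spectrum tracks the original closely.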
To verify the technical effect of the application, it is also compared with existing extrapolation methods. The characteristics of several commonly used load spectrum extrapolation methods are shown in Table 3:
Table 3: comparison of several extrapolation methods
The LSTM-based extrapolation method and the non-parametric kernel density extrapolation method, using the Epanechnikov kernel function, circle kernel function, mean-value kernel function, and amplitude kernel function respectively, are applied to the load spectrum acquired on the pebble road; the rainflow plots of these methods are compared in Fig. 11 to Fig. 16.
From the rainflow plots of Fig. 11 to Fig. 16, it can be seen that the distributions produced by non-parametric kernel density extrapolation tend toward a "single-core" feature, while the distribution produced by LSTM extrapolation has a "multi-core" feature; the latter reproduces the overall distribution and the scattered outlying points better, so the LSTM method may be selected for extrapolation in practical engineering.
Embodiment two
This embodiment differs from embodiment one in that the number of neurons is defined as 200, the time step as 30 ms, the batch size as 80, and the learning rate as 0.0001.
The above is merely an embodiment of the present invention; the scheme does not describe in detail specific structures and characteristics that are common knowledge in the relevant field. Those skilled in the art know all the ordinary technical knowledge of the technical field to which the invention belongs before the filing or priority date, can learn all the prior art in the field, and have the ability to apply routine experimental means before that date; under the guidance provided in the application, they can improve and implement this scheme in combination with their own ability, and some typical known structures or known methods should not become obstacles to their implementing the application. It should be pointed out that, for those skilled in the art, several modifications and improvements may also be made without departing from the structure of the invention; these shall also be regarded as within the protection scope of the invention and will not affect the effect of implementing the invention or the applicability of the patent. The protection scope claimed by the application shall be subject to the content of its claims, and the specific embodiments and other records in the description may be used to interpret the content of the claims.

Claims (10)

1. A load spectrum extrapolation method based on a long short-term memory network, comprising the following steps: S100: acquire load spectrum data; S200: preprocess the load spectrum data and divide it into a training set and a test set; S300: train a sample data model on the training set using the LSTM algorithm; S400: use the trained sample data model, together with the test set, to extrapolate the load spectrum data.
2. The load spectrum extrapolation method based on a long short-term memory network according to claim 1, characterized in that: each unit of the LSTM algorithm comprises an input layer, an output layer, a forgetting layer, and a state update step.
3. The load spectrum extrapolation method based on a long short-term memory network according to claim 2, characterized in that: the input layer controls its output through a sigmoid function, and the relational expression of the input layer is: i_t = σ(W_i·[h_{t-1}, X_t] + b_i), where i_t is the output of the input layer, σ is the sigmoid function, W_i is the weight of the input layer, h_{t-1} is the output of the previous LSTM unit, X_t is the input of the current LSTM unit, and b_i is the bias of the input layer.
4. The load spectrum extrapolation method based on a long short-term memory network according to claim 3, characterized in that: the state update step controls its output through a tanh function, and the relational expression of the state update step is: C̃_t = tanh(W_c·[h_{t-1}, X_t] + b_c), where C̃_t is the output of the state update step, tanh is the hyperbolic tangent function, W_c is the weight of the state update step, and b_c is the bias of the state update step.
5. The load spectrum extrapolation method based on a long short-term memory network according to claim 4, characterized in that: the forgetting layer controls its output through a sigmoid function, and the relational expression of the forgetting layer is: f_t = σ(W_f·[h_{t-1}, X_t] + b_f), where f_t is the output of the forgetting layer, W_f is the weight of the forgetting layer, and b_f is the bias of the forgetting layer.
6. The load spectrum extrapolation method based on a long short-term memory network according to claim 5, characterized in that: the relational expressions of the output layer are:
o_t = σ(W_o·[h_{t-1}, X_t] + b_o);
h_t = o_t * tanh(C_t);
where h_t is the output of the current LSTM unit, C_{t-1} is the memory state of the previous LSTM unit, C_t is the memory state of the current LSTM unit, W_o is the weight of the output layer, and b_o is the bias of the output layer.
7. The load spectrum extrapolation method based on a long short-term memory network according to claim 1, characterized in that S300 specifically comprises:
S310: build the sample data model structure;
S320: compare the data output after training with the data of the test set, and compute the cost function of the two data sets;
S330: adjust the weights and biases of each layer according to the result of S320, until the cost function converges.
8. The load spectrum extrapolation method based on a long short-term memory network according to claim 7, characterized in that: in S320, the squared error loss is used as the cost function.
9. The load spectrum extrapolation method based on a long short-term memory network according to claim 1, characterized in that: the data processing in S200 comprises one or more of normalization, standardization, and regularization.
10. The load spectrum extrapolation method based on a long short-term memory network according to claim 2, characterized in that: the LSTM algorithm defines the number of neurons as 100, the time step as 20 ms, the batch size as 60, and the learning rate as 0.0001.
CN201910551153.8A 2019-06-24 2019-06-24 Load spectrum extrapolation method based on a long short-term memory network Pending CN110263447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910551153.8A CN110263447A (en) 2019-06-24 2019-06-24 Load spectrum extrapolation method based on a long short-term memory network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910551153.8A CN110263447A (en) 2019-06-24 2019-06-24 Load spectrum extrapolation method based on a long short-term memory network

Publications (1)

Publication Number Publication Date
CN110263447A true CN110263447A (en) 2019-09-20

Family

ID=67921109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910551153.8A Pending CN110263447A (en) 2019-06-24 2019-06-24 Load spectrum extrapolation method based on a long short-term memory network

Country Status (1)

Country Link
CN (1) CN110263447A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222199A (en) * 2019-11-13 2020-06-02 中国汽车工程研究院股份有限公司 Key index selection and equivalent calculation method during association of user and test field
CN112389436A (en) * 2020-11-25 2021-02-23 中汽院智能网联科技有限公司 Safety automatic driving track-changing planning method based on improved LSTM neural network
CN116663434A (en) * 2023-07-31 2023-08-29 江铃汽车股份有限公司 Whole vehicle load decomposition method based on LSTM deep neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407649A (en) * 2016-08-26 2017-02-15 中国矿业大学(北京) Onset time automatic picking method of microseismic signal on the basis of time-recursive neural network
CN106934184A * 2017-04-25 2017-07-07 吉林大学 CNC machine tool load extrapolation method based on time-domain load extension
CN109558971A * 2018-11-09 2019-04-02 河海大学 Intelligent landslide monitoring device and method based on an LSTM long short-term memory network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
冯小平 等: "《通信对抗原理》", 31 August 2009, 西安电子科技大学出版社 *
刘彦龙 等: "基于挡位的汽车传动系载荷谱提取与外推", 《重庆理工大学学报( 自然科学)》 *
曾敬 等: "汽车试验场在场车辆总数趋势预测", 《汽车工程学报》 *
王斌杰 等: "基于载荷谱提升转向架构疲劳可靠性研究", 《铁道学报》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222199A (en) * 2019-11-13 2020-06-02 China Automotive Engineering Research Institute Co., Ltd. Key index selection and equivalence calculation method for correlating user usage with proving-ground testing
CN111222199B (en) * 2019-11-13 2022-06-21 China Automotive Engineering Research Institute Co., Ltd. Key index selection and equivalence calculation method for correlating user usage with proving-ground testing
CN112389436A (en) * 2020-11-25 2021-02-23 CAERI Intelligent Connected Technology Co., Ltd. Safe autonomous-driving lane-change planning method based on an improved LSTM neural network
CN116663434A (en) * 2023-07-31 2023-08-29 Jiangling Motors Co., Ltd. Full-vehicle load decomposition method based on an LSTM deep neural network
CN116663434B (en) * 2023-07-31 2023-12-05 Jiangling Motors Co., Ltd. Full-vehicle load decomposition method based on an LSTM deep neural network

Similar Documents

Publication Publication Date Title
WO2022077587A1 (en) Data prediction method and apparatus, and terminal device
CN110163261A (en) Unbalanced data disaggregated model training method, device, equipment and storage medium
CN106022954B (en) Multiple BP neural network load prediction method based on grey correlation degree
CN103853786B (en) The optimization method and system of database parameter
CN110263447A (en) A kind of loading spectrum Extrapolation method based on shot and long term memory network
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN110321603A (en) A kind of depth calculation model for Fault Diagnosis of Aircraft Engine Gas Path
CN109002917A (en) Total output of grain multidimensional time-series prediction technique based on LSTM neural network
CN113743016B (en) Engine residual life prediction method based on self-encoder and echo state network
CN105808689A (en) Drainage system entity semantic similarity measurement method based on artificial neural network
CN113591954A (en) Filling method of missing time sequence data in industrial system
CN114220458B (en) Voice recognition method and device based on array hydrophone
CN103675914B (en) Use existing ground type earthquake instant analysis system and the method thereof of neural network
CN112700326A (en) Credit default prediction method for optimizing BP neural network based on Grey wolf algorithm
CN109101717A (en) Solid propellant rocket Reliability Prediction Method based on reality with the study of fuzzy data depth integration
CN113687433A (en) Bi-LSTM-based magnetotelluric signal denoising method and system
CN111898734A (en) NMR (nuclear magnetic resonance) relaxation time inversion method based on MLP (Multi-layer linear programming)
CN112883522A (en) Micro-grid dynamic equivalent modeling method based on GRU (generalized regression Unit) recurrent neural network
Hu et al. pRNN: A recurrent neural network based approach for customer churn prediction in telecommunication sector
CN116703464A (en) Electric automobile charging demand modeling method and device, electronic equipment and storage medium
CN109146055A (en) Modified particle swarm optimization method based on orthogonalizing experiments and artificial neural network
CN116542701A (en) Carbon price prediction method and system based on CNN-LSTM combination model
CN108647714A (en) Acquisition methods, terminal device and the medium of negative label weight
Regazzoni et al. A physics-informed multi-fidelity approach for the estimation of differential equations parameters in low-data or large-noise regimes
CN110533109A (en) A kind of storage spraying production monitoring data and characteristic analysis method and its device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-09-20