CN112699550A - Cutting force neural network prediction model training method based on transfer learning - Google Patents


Info

Publication number
CN112699550A
CN112699550A (application CN202011581691.0A)
Authority
CN
China
Prior art keywords
cutting
training
cutting force
network
neural network
Prior art date
Legal status
Pending
Application number
CN202011581691.0A
Other languages
Chinese (zh)
Inventor
邹斌
王俊成
丁宏建
黄传真
姚鹏
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202011581691.0A
Publication of CN112699550A
Legal status: Pending


Classifications

    • G06F 30/20: Computer-aided design [CAD]; design optimisation, verification or simulation
    • G06N 3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06F 2119/14: Details relating to the type or aim of the analysis or the optimisation; force analysis or force optimisation, e.g. static or dynamic forces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a cutting force neural network prediction model training method based on transfer learning which uses cutting force data from both simulation and cutting experiments. A neural network model is first established with the simulation samples and the trained model parameters are saved. When the cutting experiment samples are used to train the neural network model, the parameters trained on the simulation samples are loaded first; the parameters of the first half of the hidden layers are fixed, that is, they are not changed during training, while the second half of the hidden layers is fine-tuned. Meanwhile, the maximum mean discrepancy (MMD) between the simulation and cutting experiment samples is calculated and added to the loss function of the network, and together these form the optimization objective of the network. The method can effectively improve the prediction accuracy of the neural network and markedly reduce the number of cutting experiment samples required.

Description

Cutting force neural network prediction model training method based on transfer learning
Technical Field
The invention relates to the technical field of intelligent manufacturing, in particular to a training method of a neural network cutting force prediction model for cutting machining.
Background
Because the cutting force in metal cutting is influenced by many factors, it is difficult to establish a complete cutting force prediction model. Fortunately, the development of neural network technology provides a good solution to this problem. A neural network model can conveniently determine the implicit relationship between a group of input and output parameters from a data set and effectively capture the nonlinear relationships among them, which is of great significance for accurate cutting force prediction in machining. Given sufficient cutting data, the cutting force can be predicted with a neural network without explicitly considering the influence of each individual factor, which makes it an attractive approach.
Neural networks are being applied ever more widely and in increasingly diverse ways. It is worth noting, however, that they are not always advantageous. The accuracy of a neural network model depends mainly on the quantity and quality of the training data set, which is the main drawback of this approach. In the conventional approach, training data are obtained by carrying out machining experiments over the entire input parameter range. Because the machining cycle is long, material costs are high, machine-tool maintenance is expensive and the actual machining process is involved, acquiring a large number of data samples usually entails considerable financial and time costs, which limits the adoption of neural networks in the machining field.
At present, no method for reducing the requirement of a neural network prediction model on data volume exists in the field of cutting force prediction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a cutting force neural network prediction model training method based on transfer learning, so that the amount of training data required by the prediction model can be reduced while still meeting the use requirements, thereby lowering the cost of establishing the prediction model.
The technical scheme of the invention is realized as follows: a cutting force neural network prediction model training method based on transfer learning comprises the following sequential steps
S1, respectively carrying out a simulation experiment and a cutting experiment by using a well-formulated cutting parameter combination, and collecting an original cutting force signal;
S2, processing the original cutting force signals collected in step S1 respectively to obtain the cutting force values corresponding to the cutting parameters;
S3, training a network with the simulation samples and the cutting experiment samples, wherein the simulation samples are used to pre-train the neural network model and the experimental samples are used to fine-tune the network parameters;
s4, simultaneously calculating the MMD distance between the simulation sample and the experiment sample, wherein the calculation formula is as follows
Figure BDA0002865245450000021
Wherein the content of the first and second substances,
Figure BDA0002865245450000022
is a Gaussian kernel function
Figure BDA0002865245450000023
And S5, adding the MMD distance of step S4 to the loss function of the network so that together they form the optimization objective of the network.
As a preferred embodiment, in step S1
S11, in the simulation experiment, under each set of parameters the cutting travel is the distance corresponding to a 100° rotation of the cutter; the obtained cutting force signal is filtered with a low-pass filter with a cut-off frequency of 8n and then fitted with a tenth-order polynomial, and the extreme values of the cutting force peaks are averaged to give the cutting force value for the corresponding cutting parameters;
wherein n is the rotating speed of the main shaft and the unit is r/min.
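By way of illustration only, the processing of step S11 can be sketched in Python as follows; the sampling rate fs, the Butterworth filter order and the use of scipy.signal.find_peaks to locate the fitted peaks are assumptions not specified above:

# Illustrative sketch of the simulation-signal processing described in S11.
# Assumptions (not specified above): sampling rate fs, Butterworth filter order,
# and the use of find_peaks on the magnitude of the fitted curve.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def simulated_cutting_force_value(t, force, n_rpm, fs):
    """t, force: simulated time/force arrays; n_rpm: spindle speed (r/min); fs: sampling rate (Hz)."""
    # Low-pass filter with cut-off frequency 8*n (n in r/min), as described in S11.
    cutoff = 8.0 * n_rpm
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    filtered = filtfilt(b, a, force)

    # Fit a tenth-order polynomial to the filtered signal.
    coeffs = np.polyfit(t, filtered, deg=10)
    fitted = np.polyval(coeffs, t)

    # Take the extreme value of each cutting-force peak on the fitted curve and
    # average them (peaks of |force|, since the example X-axis signal is negative).
    peaks, _ = find_peaks(np.abs(fitted))
    return float(np.mean(fitted[peaks])) if len(peaks) else float("nan")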
As a preferred embodiment, in step S1
S12, in the cutting experiment, under each set of parameters the cutting distance of the cutter is 50 mm; the obtained cutting force signal is filtered with a low-pass filter with a cut-off frequency of 5 × z × n ÷ 60, the portions at both ends of the signal corresponding to tool cut-in and cut-out are removed, the remaining portion is divided into segments of one spindle revolution each, the maximum value is taken within each segment, and the average of the maxima over all segments is taken as the cutting force value for the corresponding cutting parameters;
wherein n is the rotating speed of the main shaft, the unit is r/min, and z is the number of teeth of the cutter.
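A corresponding sketch for step S12 is given below; the sampling rate fs, the filter order and the trimming of a fixed 10% of the record at each end to remove the tool cut-in and cut-out portions are assumptions:

# Illustrative sketch of the experimental-signal processing described in S12.
import numpy as np
from scipy.signal import butter, filtfilt

def experimental_cutting_force_value(force, n_rpm, z_teeth, fs):
    """force: measured force array; n_rpm: spindle speed (r/min); z_teeth: tool teeth; fs: sampling rate (Hz)."""
    # Low-pass filter with cut-off 5*z*n/60.
    cutoff = 5.0 * z_teeth * n_rpm / 60.0
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    filtered = filtfilt(b, a, force)

    # Trim the tool cut-in / cut-out portions at both ends (assumed 10% here).
    trim = int(0.1 * len(filtered))
    steady = filtered[trim: len(filtered) - trim]

    # Split the remainder into segments of one spindle revolution each, take the
    # maximum (of the magnitude, assumed) of each segment, and average the maxima.
    samples_per_rev = int(round(fs * 60.0 / n_rpm))
    n_segments = len(steady) // samples_per_rev
    maxima = [np.max(np.abs(steady[i * samples_per_rev:(i + 1) * samples_per_rev]))
              for i in range(n_segments)]
    return float(np.mean(maxima)) if maxima else float("nan")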
As a preferred embodiment, in step S3
S31, the parameters of the neural network model pre-trained on the simulation samples are reused when the network is trained with the experimental samples; the parameters of the first half of the hidden layers of the pre-trained network are fixed, and the parameters of the second half of the hidden layers are fine-tuned.
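As an illustration of S31, the following Python (PyTorch) sketch freezes the first half of the hidden layers of a pre-trained network and leaves the second half trainable; the layer widths and the file name of the saved pre-trained parameters are assumptions:

# Illustrative sketch of S31: freeze the first half of the hidden layers of the
# pre-trained network and fine-tune the second half. Layer sizes are assumed.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),    # hidden layer 1
    nn.Linear(32, 32), nn.ReLU(),   # hidden layer 2
    nn.Linear(32, 32), nn.ReLU(),   # hidden layer 3
    nn.Linear(32, 32), nn.ReLU(),   # hidden layer 4
    nn.Linear(32, 3),               # output: X-, Y- and Z-axis forces
)
net.load_state_dict(torch.load("pretrained_simulation.pt"))  # parameters from the simulation pre-training (assumed file name)

# Freeze hidden layers 1-2 (the "first half"); layers 3-4 and the output layer stay trainable.
for layer in (net[0], net[2]):
    for p in layer.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam((p for p in net.parameters() if p.requires_grad), lr=1e-3)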
As a preferred embodiment, in step S4
S41, σ in the Gaussian kernel function

$$k\left(x,x'\right)=\exp\left(-\frac{\lVert x-x'\rVert^{2}}{2\sigma^{2}}\right)$$

takes several values; the kernel functions are calculated separately and summed, and the sum is used as the final kernel function in the calculation.
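A sketch of the multi-kernel MMD computation of steps S4 and S41 is given below (Python/NumPy); the biased form of the estimator is an assumption, and the σ values shown are those used in the embodiment described later:

# Illustrative multi-kernel MMD between simulation (source) and experiment (target)
# samples, using a sum of Gaussian kernels with several bandwidths sigma.
import numpy as np

def multi_kernel_mmd(Xs, Xt, sigmas=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Xs: (ns, d) source samples, Xt: (nt, d) target samples."""
    def kernel_sum(A, B):
        # Pairwise squared Euclidean distances.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        # Sum of Gaussian kernels k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)).
        return sum(np.exp(-d2 / (2.0 * s**2)) for s in sigmas)

    ns, nt = len(Xs), len(Xt)
    return (kernel_sum(Xs, Xs).sum() / ns**2
            - 2.0 * kernel_sum(Xs, Xt).sum() / (ns * nt)
            + kernel_sum(Xt, Xt).sum() / nt**2)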
After the technical scheme is adopted, the invention has the beneficial effects that:
1. For the same number of training samples, a neural network established with the method performs better than a common network, which reduces, to a certain extent, the amount of experimental data required to predict the cutting force with a neural network.
2. The performance advantage of the method is most pronounced when only a small number of training samples is available, so a neural network prediction model can be built from very few training samples in applications with modest prediction accuracy requirements.
3. The performance advantage of the neural network model established with the method gradually weakens as the number of training samples grows, but its prediction error remains lower than that of a model established without the method.
4. The method has no negative influence on the prediction performance of the network for any number of training samples, and is therefore an effective choice when it is uncertain whether the available samples are enough to train a neural network model with excellent performance.
5. The method can effectively reduce the number of samples required to train the neural network prediction model, and thereby reduce the cost of materials, equipment, cutters, time and the like, improving resource utilization and reducing waste.
6. The method is applicable to various prediction tasks of fully connected neural networks and therefore has a wide application range.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic view of the machining mode used in the simulation and cutting experiments;
FIG. 2 is a schematic diagram of the simulation experiment process;
FIG. 3 is a flow chart of the processing of the cutting force signal obtained by simulation;
FIG. 4 is a flow chart of the processing of the cutting force signal obtained from the cutting experiment;
FIG. 5 is a schematic diagram of the neural network training process proposed by the invention;
FIG. 6 is a comparison of the error rates of the migration network and the common network for the X-axis force;
FIG. 7 is a comparison of the error rates of the migration network and the common network for the Y-axis force.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The cutting method of the exemplary embodiment of the present application is face milling; a schematic of the cutting method is shown in FIG. 1. The workpiece material is aluminum alloy 2A14, and the tool is a solid carbide end mill with a diameter of 12 mm. The experimental cutting parameter ranges are given in Table 1.
TABLE 1 ranges of cutting parameters
Cutting parameter    Range of values
Rotating speed (r/min) 1500~3500
Feed per tooth (mm/r) 0.05~0.25
Radial depth of cut (mm) 0.5~1.7
Axial depth of cut (mm) 0.8~2.4
The cutting parameters were combined within these ranges, and 600 sets of simulation experiments and 400 sets of cutting experiments were designed. The simulation and cutting experiments were performed separately. The simulation experiments were carried out with Third Wave AdvantEdge software; the simulation process is shown in FIG. 2. Under each set of cutting parameters in the simulation, the cutting distance of the cutter is the distance corresponding to a 100° rotation. The cutting experiments were performed on an ACE-V500 machining center, and the cutting force signals were collected with a Kistler 9257B three-component piezoelectric dynamometer. In the experiments, the cutting distance of the cutter under each set of cutting parameters was about 50 mm.
Taking an X-axis force cutting signal corresponding to cutting parameters of 1500r/min of spindle rotation speed, 0.15mm of feed per tooth, 1.1mm of radial depth and 1.6mm of axial depth as an example, a data processing flow of the cutting force signal of a simulation and cutting experiment is described as follows:
FIG. 3(a) shows the state of the tool and workpiece at the end of the simulation. In this embodiment, a complete simulation covers a 100° rotation of the tool, during which two cutter teeth take part in the cut; an example of the resulting cutting force signal is shown in FIG. 3(b). As can be seen, the raw force signal is noisy, its value fluctuates strongly, and its peak exceeds -500, which differs considerably from the peak of the cutting force signal obtained experimentally. The cutting force signals obtained by simulation and by experiment therefore cannot be processed in the same way. FIG. 3(c) shows the cutting force signal after filtering with a low-pass filter with a cut-off frequency of 8n (n is the spindle speed in r/min). As shown in FIG. 3(c), the filtered signal is much improved compared with the raw signal, but its value still fluctuates considerably. The trough of the circled portion exceeds -100, while the average of the peak portion is estimated to be about -85. Selecting the peak value directly would seriously impair the regularity of the extracted cutting force values. Therefore, a tenth-order polynomial is used to fit the filtered signal; the fitted curve is shown in FIG. 3(d). The extreme values of the two peaks are taken from the fitted curve and their average is used as the X-axis force value for this set of cutting parameters.
Cutting experiment data processing flow:
The raw cutting force signal obtained in the cutting experiment is shown in FIG. 4(a). First, the raw signal is filtered with a low-pass filter. The filter cut-off frequency is calculated as 5 × z × n ÷ 60, where z denotes the number of tool teeth and n denotes the spindle speed in r/min; the cut-off frequency used here is therefore 500. The cutting force signal fluctuates when the cutter enters and leaves the workpiece, so the portions at both ends of the signal corresponding to tool cut-in and cut-out are removed. The remaining signal is divided into segments whose length equals the time of one tool revolution, the maximum of the values in each segment is taken, and the average of all the maxima gives the X-axis cutting force value for this set of parameters.
Examples of the cutting force values extracted from the simulated and experimental cutting force signals are shown in Table 2:
table 2 sample examples of cutting force values
After removing abnormal samples, 467 sets of simulation samples and 400 sets of experimental samples were obtained in this embodiment. The data processing flow and the simulation and experimental results show that the cutting force values obtained by simulation and by experiment differ to a certain extent. This difference is mainly due to the difference between the raw signals and to the different data processing methods. The experimental signals come from the dynamometer measurements, while the simulation signals come from the calculations of the simulation software. As can be seen from the raw signal curves in FIG. 3(b) and FIG. 4(a), the two cutting force signals differ greatly in amplitude and in their variation law, and this difference in the raw signals means they cannot be processed with the same method. The simulated raw signal has to be filtered and fitted, and in this process the extracted simulated cutting force values end up somewhat smaller than the experimental cutting force values.
In this embodiment, the processing of the simulation signal aims to ensure that the extracted cutting force values are as regular as possible, rather than to make the simulated values agree with the experimental cutting force values. The laws contained in the data are often the essential carriers of "knowledge". Therefore, during data processing, care is taken that the extracted simulated cutting force values exhibit strong regularity.
In cutting force data extracted in simulation and cutting experiments, each sample comprises seven dimensions including rotating speed, feed per tooth, axial cutting depth, radial cutting depth, X-axis force, Y-axis force and Z-axis force, wherein the first four are input units of a neural network, and the last three are output units. The construction process of the neural network is shown in fig. 5. Because the number of input and output units in the network is small, a neural network structure with four hidden layers is adopted. First, the neural network is pre-trained using 80% of the simulation samples, the network performance is verified using the remaining 20% of the simulation samples, and the parameters of the network are saved.
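By way of illustration, the construction and pre-training stage can be sketched in Python (PyTorch) as follows; the hidden-layer width, learning rate, number of epochs and the file name used for saving are assumptions, since the text above specifies only a fully connected network with four hidden layers, four inputs and three outputs:

# Illustrative sketch of the pre-training stage: a fully connected network with
# four hidden layers, 4 inputs (cutting parameters) and 3 outputs (X/Y/Z forces),
# trained on 80% of the simulation samples and saved for later reuse.
import torch
import torch.nn as nn

def build_net(hidden=32):
    return nn.Sequential(
        nn.Linear(4, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 3),
    )

def pretrain(sim_x, sim_y, epochs=2000, lr=1e-3):
    """sim_x: (N, 4) simulation cutting parameters, sim_y: (N, 3) simulation forces (torch.Tensor)."""
    split = int(0.8 * len(sim_x))                      # 80% for training, 20% for verification
    net, loss_fn = build_net(), nn.MSELoss()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(sim_x[:split]), sim_y[:split])
        loss.backward()
        opt.step()
    val_loss = loss_fn(net(sim_x[split:]), sim_y[split:]).item()  # check on the held-out 20%
    torch.save(net.state_dict(), "pretrained_simulation.pt")       # save the pre-trained parameters
    return net, val_loss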
The parameters of the pre-trained network are then used as initial values, and the network is retrained with the experimental samples. During retraining, the first two hidden layers are fixed, i.e. the parameters of this part are not changed during training, while the last two hidden layers are fine-tuned. The MMD distance between the simulation and experimental data is added to the loss function of the neural network. The optimization objective of the whole network therefore comprises the prediction error on the experimental data and the discrepancy between the two groups of samples, and can be written as follows:
$$\min\;\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^{2}+\mathrm{MKMMD}\left(X_s,X_t\right)$$

wherein $\hat{y}_i$ and $y_i$ respectively represent the actual and predicted values, and $\mathrm{MKMMD}(X_s,X_t)$ represents the multi-kernel MMD computed between the source domain and the target domain.
The formula for the MMD distance between the simulation samples and the experimental samples is

$$\mathrm{MMD}^{2}(X_s,X_t)=\frac{1}{n_s^{2}}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s}k\left(x_i^{s},x_j^{s}\right)-\frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t}k\left(x_i^{s},x_j^{t}\right)+\frac{1}{n_t^{2}}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t}k\left(x_i^{t},x_j^{t}\right)$$

The kernel function $k(\cdot,\cdot)$ used in this embodiment when calculating the MMD distance is the Gaussian kernel

$$k\left(x,x'\right)=\exp\left(-\frac{\lVert x-x'\rVert^{2}}{2\sigma^{2}}\right)$$
σ is taken as 0.25, 0.5, 1, 2 and 4 in turn; the corresponding kernel functions are calculated separately and summed, and the sum is used as the final kernel function in the calculation.
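Putting the pieces together, the retraining stage with the combined optimization objective can be sketched as follows. The learning rate, epoch count and the choice of computing the MMD on the hidden-layer representations of the two sample sets (rather than on the raw samples) are assumptions: the layer at which the discrepancy is measured is not stated above, and measuring it on learned features is the usual practice in domain-adaptation networks. The sketch reuses the network layout of the pre-training sketch above.

# Illustrative sketch of the fine-tuning stage: prediction error on the experimental
# samples plus the multi-kernel MMD between simulation and experimental data.
import torch
import torch.nn as nn

def gaussian_mk_kernel(A, B, sigmas=(0.25, 0.5, 1.0, 2.0, 4.0)):
    # Sum of Gaussian kernels over the assumed set of bandwidths.
    d2 = A.pow(2).sum(1)[:, None] + B.pow(2).sum(1)[None, :] - 2.0 * A @ B.T
    return sum(torch.exp(-d2 / (2.0 * s**2)) for s in sigmas)

def mk_mmd(Xs, Xt):
    ns, nt = len(Xs), len(Xt)
    return (gaussian_mk_kernel(Xs, Xs).sum() / ns**2
            - 2.0 * gaussian_mk_kernel(Xs, Xt).sum() / (ns * nt)
            + gaussian_mk_kernel(Xt, Xt).sum() / nt**2)

def finetune(net, exp_x, exp_y, sim_x, epochs=1000, lr=1e-3):
    """net: the pre-trained Sequential from the sketch above; exp_x/exp_y: experimental samples; sim_x: simulation inputs."""
    # Freeze the first two hidden layers; fine-tune the last two and the output layer.
    for layer in (net[0], net[2]):
        for p in layer.parameters():
            p.requires_grad = False
    features = net[:8]                      # representation after the fourth hidden layer (assumed MMD layer)
    opt = torch.optim.Adam((p for p in net.parameters() if p.requires_grad), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        # Combined objective: prediction error + multi-kernel MMD between domains.
        loss = mse(net(exp_x), exp_y) + mk_mmd(features(sim_x), features(exp_x))
        loss.backward()
        opt.step()
    return net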
The neural network model trained with the method is referred to in the invention as the migration network. Its control group is a traditional BP neural network that does not use the transfer learning method and is trained only on experimental data; this network is referred to as the common network, and its optimization objective is:
$$\min\;\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^{2}$$
In accordance with the invention, prediction models are established with different numbers of experimental samples, and the effect of the method is compared across these sample counts. The training processes of the migration network and the common network with "n" experimental samples as the training set are as follows:
The training process of the migration network is as follows:
(1) A four-hidden-layer neural network is established and randomly initialized; it is pre-trained on a training set consisting of 80% of the simulation samples, and its performance is tested on the remaining simulation samples.
(2) n samples are extracted from the training set constructed from the experimental samples.
(3) With the network pre-trained on the simulation data as the initial value, the network is trained with the n experimental samples, and the complete trained model is saved.
(4) Different values of "n" are selected and the above steps are repeated.
(5) The error rate of each network trained with the experimental samples is evaluated on the test set.
Here n ∈ {5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200}. Besides having different optimization objectives, the common network and the migration network also differ in their training processes. The training process of the common network is comparatively simple: the initial network structure is trained directly on the experimental samples. During training, the experimental samples used by the common network and the migration network, as well as parameters such as the learning rate and the number of iterations, are kept consistent.
During the training of the migration network, the extraction of the experimental samples is not completely random. When n samples are used to create the training data set, the samples are selected according to a fixed rule: the first 200 of all experimental samples are used as the training set and the last 200 as the test set, and taking "n" samples means selecting the first n samples of the training set, so that when n = 200 all training samples are used. Over the whole training procedure, the entire data set is randomly re-partitioned five times. The initial values of a network often have a certain influence on its convergence; to eliminate this influence on the experimental results, the migration network and the common network are each randomly initialized ten times in this embodiment, where the initialization of the migration network refers to the initialization of its pre-training network. That is, for each value of n, the migration network and the common network each train 50 models, whose prediction accuracy is then evaluated on the corresponding test set.
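The comparison protocol described above can be sketched as follows; train_model stands for either of the two training procedures (migration or common network) and is a hypothetical callable, and the error metric is assumed to be a mean relative error:

# Illustrative sketch of the evaluation protocol: for each training-set size n,
# 5 random data splits x 10 random initializations = 50 models per network type,
# evaluated on the corresponding test set and averaged.
import numpy as np

N_VALUES = [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100,
            110, 120, 130, 140, 150, 160, 170, 180, 190, 200]

def mean_error_rate(pred, true):
    return float(np.mean(np.abs(pred - true) / np.abs(true)))

def evaluate(exp_x, exp_y, train_model):
    """train_model(train_x, train_y) -> callable predictor (hypothetical); exp_x/exp_y: 400 experimental samples."""
    results = {n: [] for n in N_VALUES}
    for split in range(5):                                  # 5 random partitions of the data set
        idx = np.random.permutation(len(exp_x))
        tr, te = idx[:200], idx[200:]                       # first 200 for training, last 200 for testing
        for n in N_VALUES:
            sel = tr[:n]                                    # "n" samples = the first n training samples
            for init in range(10):                          # 10 random initializations
                predictor = train_model(exp_x[sel], exp_y[sel])
                results[n].append(mean_error_rate(predictor(exp_x[te]), exp_y[te]))
    return {n: float(np.mean(v)) for n, v in results.items()}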
Following the training procedure described above, the performance of the common network and migration network models is evaluated on the test set. For each number of experimental samples used in training, the two groups of models are evaluated on the test set and their prediction errors are averaged. The experimental results verify the influence of the transfer learning method on the prediction accuracy of the neural network model. The comparative data for the two models are shown in FIG. 6 and FIG. 7.
The comparison of the error rates of the migration network and the common network shows that the migration network behaves differently in different sample-size ranges. When the number of training samples is 90 or fewer, the migration network has a clear performance advantage: its average error rates for the X- and Y-axis forces are 11.15% and 8.49% lower, respectively, than those of the common network. When the number of training samples exceeds 100, the prediction performance of the migration network does not differ significantly from that of the common network.
In the range of 100-200 samples, the difference between the prediction errors of the migration network and the common network is less than 1%, which indicates that in this range the transfer learning method has no significant impact on the performance of the neural network. The comparison of the prediction error rates in the range of 5-90 samples shows that the average prediction error of the migration network is smaller than that of the common network and the performance is clearly improved. In the range of 30-90 samples, the prediction errors of the migration network for the X- and Y-axis forces are 8.84% and 5.91% lower, respectively, than those of the common network; this is a suitable application range for the migration network. In the range of 5-20 samples, the performance improvement of the migration network is the most pronounced, with average errors for the X- and Y-axis forces 16.57% and 14.49% lower than those of the common network, but the error rates of both networks are high overall. Therefore, in the range of 5-20 samples it is difficult for either the migration network or the common network to satisfy practical use requirements.
Over the whole sample range, the prediction error of the migration network is mostly smaller than that of the common network, and the performance advantage essentially decreases gradually and finally disappears. The results show that the influence of the transfer learning method on the performance of the neural network weakens as the number of samples increases: when the number of samples exceeds 100 the effect of transfer learning becomes weak, and when it exceeds 140 the transfer learning method has essentially no effect.
The experimental data show that in the range of 0-20 samples the prediction errors of both the migration network and the common network are large, so prediction models established in this range are not suitable for practical application. In the range of 0-90 samples, the performance of the migration network is clearly improved relative to the common network; this is the range in which the transfer learning method is generally most suitable. In the range of 100-200 samples, the prediction error achievable by the model has essentially reached its limit, and even if the training samples continue to increase it is difficult to improve the model performance significantly. It is also worth noting that although the migration network has different degrees of performance advantage in different sample ranges, in no sample range does the transfer learning method negatively affect the performance of the model. When it is not known in advance whether the number of samples is sufficient to train a well-performing neural network, the method provided by the invention is therefore the better choice.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (5)

1. A cutting force neural network prediction model training method based on transfer learning is characterized by comprising the following steps: comprises the following sequential steps
S1, respectively carrying out a simulation experiment and a cutting experiment by using a well-formulated cutting parameter combination, and collecting an original cutting force signal;
S2, processing the original cutting force signals collected in step S1 respectively to obtain the cutting force values corresponding to the cutting parameters;
S3, training a network with the simulation samples and the cutting experiment samples, wherein the simulation samples are used to pre-train the neural network model and the experimental samples are used to fine-tune the network parameters;
S4, simultaneously calculating the MMD distance between the simulation samples and the experiment samples, the calculation formula being

$$\mathrm{MMD}^{2}(X_s,X_t)=\frac{1}{n_s^{2}}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s}k\left(x_i^{s},x_j^{s}\right)-\frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t}k\left(x_i^{s},x_j^{t}\right)+\frac{1}{n_t^{2}}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t}k\left(x_i^{t},x_j^{t}\right)$$

wherein $k(\cdot,\cdot)$ is a Gaussian kernel function

$$k\left(x,x'\right)=\exp\left(-\frac{\lVert x-x'\rVert^{2}}{2\sigma^{2}}\right)$$
And S5, adding the MMD distance of step S4 to the loss function of the network so that together they form the optimization objective of the network.
2. The method for training the cutting force neural network prediction model based on the transfer learning as claimed in claim 1, wherein: in step S1
S11, in the simulation experiment, under each set of parameters the cutting travel is the distance corresponding to a 100° rotation of the cutter; the obtained cutting force signal is filtered with a low-pass filter with a cut-off frequency of 8n and then fitted with a tenth-order polynomial, and the extreme values of the cutting force peaks are averaged to give the cutting force value for the corresponding cutting parameters;
wherein n is the rotating speed of the main shaft and the unit is r/min.
3. The method for training the cutting force neural network prediction model based on the transfer learning as claimed in claim 1, wherein: in step S1
S12, in the cutting experiment, under each set of parameters the cutting distance of the cutter is 50 mm; the obtained cutting force signal is filtered with a low-pass filter with a cut-off frequency of 5 × z × n ÷ 60, the portions at both ends of the signal corresponding to tool cut-in and cut-out are removed, the remaining portion is divided into segments of one spindle revolution each, the maximum value is taken within each segment, and the average of the maxima over all segments is taken as the cutting force value for the corresponding cutting parameters;
wherein n is the rotating speed of the main shaft, the unit is r/min, and z is the number of teeth of the cutter.
4. The method for training the cutting force neural network prediction model based on the transfer learning as claimed in claim 1, wherein: in step S3
S31, the parameters of the neural network model pre-trained on the simulation samples are reused when the network is trained with the experimental samples; the parameters of the first half of the hidden layers of the pre-trained network are fixed, and the parameters of the second half of the hidden layers are fine-tuned.
5. The method for training the cutting force neural network prediction model based on the transfer learning as claimed in claim 1, wherein: in step S4
S41, σ in the Gaussian kernel function

$$k\left(x,x'\right)=\exp\left(-\frac{\lVert x-x'\rVert^{2}}{2\sigma^{2}}\right)$$

takes several values; the kernel functions are calculated separately and summed, and the sum is used as the final kernel function in the calculation.
CN202011581691.0A 2020-12-28 2020-12-28 Cutting force neural network prediction model training method based on transfer learning Pending CN112699550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011581691.0A CN112699550A (en) 2020-12-28 2020-12-28 Cutting force neural network prediction model training method based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011581691.0A CN112699550A (en) 2020-12-28 2020-12-28 Cutting force neural network prediction model training method based on transfer learning

Publications (1)

Publication Number Publication Date
CN112699550A true CN112699550A (en) 2021-04-23

Family

ID=75511252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011581691.0A Pending CN112699550A (en) 2020-12-28 2020-12-28 Cutting force neural network prediction model training method based on transfer learning

Country Status (1)

Country Link
CN (1) CN112699550A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528870A (en) * 2020-12-14 2021-03-19 华侨大学 Multi-point vibration response prediction method based on MIMO neural network and transfer learning
CN115157236A (en) * 2022-05-30 2022-10-11 中国航发南方工业有限公司 Robot stiffness model precision modeling method, system, medium, equipment and terminal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309798A (en) * 2019-07-05 2019-10-08 中新国际联合研究院 A kind of face cheat detecting method extensive based on domain adaptive learning and domain
CN111445147A (en) * 2020-03-27 2020-07-24 中北大学 Generative confrontation network model evaluation method for mechanical fault diagnosis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309798A (en) * 2019-07-05 2019-10-08 中新国际联合研究院 A kind of face cheat detecting method extensive based on domain adaptive learning and domain
CN111445147A (en) * 2020-03-27 2020-07-24 中北大学 Generative confrontation network model evaluation method for mechanical fault diagnosis

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JUNCHENG WANG et al.: "Milling force prediction model based on transfer learning and neural network", Journal of Intelligent Manufacturing *
MUHAMMAD GHIFARY et al.: "Domain Adaptive Neural Networks for Object Recognition", arXiv *
XU Xudong, MA Liqian: "Control chart recognition based on transfer learning and convolutional neural network", Journal of Computer Applications *
WANG Yiquan: "SOC estimation method for power batteries based on LSTM-DaNN", Energy Storage Science and Technology *
FAN Tao et al.: "Research on multimodal fusion emotion recognition of netizens based on deep learning", Journal of Information Resources Management *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528870A (en) * 2020-12-14 2021-03-19 华侨大学 Multi-point vibration response prediction method based on MIMO neural network and transfer learning
CN112528870B (en) * 2020-12-14 2024-03-01 华侨大学 Multi-point vibration response prediction method based on MIMO neural network and transfer learning
CN115157236A (en) * 2022-05-30 2022-10-11 中国航发南方工业有限公司 Robot stiffness model precision modeling method, system, medium, equipment and terminal

Similar Documents

Publication Publication Date Title
CN112699550A (en) Cutting force neural network prediction model training method based on transfer learning
CN105512799B (en) Power system transient stability evaluation method based on mass online historical data
CN110647943B (en) Cutting tool wear monitoring method based on evolution data cluster analysis
CN111563301A (en) Thin-wall part milling parameter optimization method
CN113798920B (en) Cutter wear state monitoring method based on variational automatic encoder and extreme learning machine
CN109556863B (en) MSPAO-VMD-based large turntable bearing weak vibration signal acquisition and processing method
CN106112697A (en) A kind of milling parameter automatic alarm threshold setting method based on 3 σ criterions
CN106970593A (en) It is a kind of that the method that processing flutter suppresses online is realized by speed of mainshaft intelligent control
CN112288193A (en) Ocean station surface salinity prediction method based on GRU deep learning of attention mechanism
Liu et al. A hybrid health condition monitoring method in milling operations
CN113752089A (en) Cutter state monitoring method based on singularity Leersian index
CN113126564B (en) Digital twin driven numerical control milling cutter abrasion on-line monitoring method
CN113569353A (en) Reliability optimization method and device for micro-milling parameters and electronic equipment
CN117116291A (en) Sound signal processing method of sand-containing water flow impulse turbine
CN115922553A (en) Method for online monitoring polishing processing state of silicon carbide wafer
CN108956783B (en) HDP-HSMM-based grinding sound grinding wheel passivation state detection method
CN117066972A (en) Turbine blade root slot milling outlet burr prediction method based on data reasoning
CN109382702A (en) A kind of chain digital control gear hobbing machine rolling blade losing efficacy form automatic identifying method
CN107807526A (en) A kind of method for intelligently suppressing processing flutter based on Simulation of stability
CN111563543B (en) Method and device for cleaning wind speed-power data of wind turbine generator
CN114638265B (en) Milling chatter judging method based on signal convolution neural network
CN116401501A (en) Dredging operation leakage quantity prediction method and device, electronic equipment and medium
CN114714146A (en) Method for simultaneously predicting surface roughness and cutter abrasion
CN114077851A (en) FSVC-based ball mill working condition identification method
CN112035978B (en) Cutter parameter optimization design method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination