CN117648533A - Thin-walled workpiece processing deformation prediction method based on transfer learning - Google Patents

Thin-walled workpiece processing deformation prediction method based on transfer learning

Info

Publication number
CN117648533A
Authority
CN
China
Prior art keywords
function
training
model
data
domain
Prior art date
Legal status
Pending
Application number
CN202311618604.8A
Other languages
Chinese (zh)
Inventor
冯睽睽
张发平
王武宏
吴贞鹤
张梦迪
王彪
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
2023-11-30
Filing date
2023-11-30
Publication date
2024-03-05
Application filed by Beijing Institute of Technology (BIT)
Priority to CN202311618604.8A
Publication of CN117648533A
Legal status: Pending

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a thin-walled part machining deformation prediction method based on transfer learning, belonging to the field of data-driven modeling. The method is implemented as follows: two segments of collected small-sample machining deformation data are selected to serve as the source domain and the target domain of the model, and the source-domain and target-domain data are converted into image data; a model is trained on the source-domain image data with a ResNet-50 neural network, constructing a feature function and training weights; the model parameters obtained by training on the source domain are transferred to the target domain, and multi-strategy parameter correction is performed on the target-domain image data to obtain several groups of trained models; the trained model with the best accuracy and robustness is then screened out to obtain the optimal transfer learning strategy, enabling efficient and accurate prediction of thin-walled part machining deformation.

Description

Thin-walled workpiece processing deformation prediction method based on transfer learning
Technical Field
The invention relates to a method for predicting the machining deformation of thin-walled parts, and in particular to a machining deformation prediction method based on a data-driven model.
Background
After a machined thin-walled part is unclamped and taken offline, it undergoes bending, twisting, dislocation and other deformations, so that the final form of its geometric features deviates from the form it had before the clamp was released; the specified precision requirements can then no longer be met, which in turn reduces the part yield. Accurate prediction of the machining deformation of the part is therefore important for optimizing process parameters and improving workpiece quality.
Studies have shown that machining deformation is induced by residual stresses, such as the initial residual stress and the machining-induced residual stress. When the workpiece is unclamped, the residual stress equilibrium is broken, the workpiece deforms as a whole, and a long aging period is required before a new equilibrium is reached. Many researchers therefore model the deformation mechanism of the workpiece from the standpoint of residual stress characterization. However, deformation prediction based on such mechanism-driven models has two drawbacks: first, the machining process involves too many uncertain factors and the computation is inefficient; second, existing measurement methods cannot accurately measure residual stress over a large area, which makes model verification difficult.
To address these problems, deformation prediction methods based on data-driven models have been widely applied; such methods take deformation-related monitoring data as input and establish a mapping to the unknown geometric deformation data. Although the data-driven approach avoids the many assumptions and verifications of the mechanism model and predicts machining deformation effectively with advanced data analysis methods, several difficulties remain. For example, a thin-walled part deforms noticeably within 72 hours after machining is finished, so the offline stage is quite long; a large amount of monitoring data is needed to guarantee the predictive capability of the model, which raises the difficulty of data acquisition in actual machining and increases the time cost of model training. Therefore, reducing the amount of model input while preserving the predictive capability of the model is the key to fast and accurate prediction of machining deformation.
Disclosure of Invention
In view of the shortcomings of existing data-driven machining deformation prediction methods, the main purpose of the invention is to provide a thin-walled part machining deformation prediction method based on transfer learning. First, two segments of collected small-sample machining deformation data are selected to serve as the source domain and the target domain of the model, and the source-domain and target-domain data are converted into image data; then a model is trained on the source-domain image data with a ResNet-50 neural network, constructing a feature function and training weights; next, the model parameters obtained by training on the source domain are transferred to the target domain, and multi-strategy parameter correction is performed on the target-domain image data to obtain several groups of trained models; finally, the trained model with the best accuracy and robustness is screened out to obtain the optimal transfer learning strategy, realizing efficient and accurate prediction of thin-walled part machining deformation.
The aim of the invention is achieved by the following technical scheme.
The invention discloses a thin-walled part machining deformation prediction method based on transfer learning, which comprises the following steps:
Step one: select two segments of small-sample data from the actually collected offline time-series deformation data of the thin-walled part, divide them into a source domain and a target domain according to their time order, and convert the source-domain and target-domain data into image data. Because the data vary in a nonlinear, non-stationary way and the image must resolve both time and frequency, the conversion is carried out in three steps: normalization, extremum-domain mean modal decomposition, and time-frequency transformation.
Step 1.1: normalization process
The source-domain and target-domain data {x_1, ···, x_i, ···, x_n} are standardized to obtain y(t), the standardized source-domain and target-domain data set. y(t) is taken as the initialization function.
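As a concrete illustration of this step, a minimal sketch in Python follows. The exact standardization formula is not reproduced in the published text, so z-score standardization is assumed here, and the file names are hypothetical.

```python
# Minimal sketch of step 1.1 (assumption: z-score standardization; the patent's
# exact formula is not reproduced in the text). File names are hypothetical.
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Map a raw deformation series {x_1, ..., x_n} to the standardized y(t)."""
    return (x - x.mean()) / x.std()

source_raw = np.loadtxt("source_domain.csv", delimiter=",")  # hypothetical file
target_raw = np.loadtxt("target_domain.csv", delimiter=",")  # hypothetical file
y_source = standardize(source_raw)   # initialization function for the source domain
y_target = standardize(target_raw)   # initialization function for the target domain
```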
Step 1.2: polar region mean modal decomposition
(1) Find all extrema of the initialization function to form the extremum set of the function.
(2) According to the integral mean value theorem, calculate the local mean of each pair of adjacent extreme points, fit the local mean function by cubic spline interpolation, and define the difference between the initialization function and the local mean function as the error function.
(3) Determine whether the error function is an eigenmode function. If it is, separate the eigenmode function from the initialization function, take the remaining local mean function as the new initialization function, and go to step (4); if it is not, take the error function as the new initialization function and return to step (1).
The criterion for determining whether the error function is an eigenmode function is expressed in terms of g(t), m(t), T and Δt, where g(t) is the initialization function, m(t) is the local mean function, T is the total length of the time series, and Δt is the time-series interval.
(4) Find the extreme points of the updated initialization function. If there are no fewer than 2 extreme points, return to step (1); if there are fewer than 2, take the updated initialization function as the residual function.
After steps (1) to (4) are completed, y(t) is decomposed into several eigenmode functions and a residual function:

y(t) = Σ_{i=1}^{I} c_i(t) + δ(t)

where c_i(t) is the i-th eigenmode function of the decomposition, δ(t) is the residual function, and I is the number of eigenmode functions.
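A minimal sketch of the decomposition loop in steps (1)-(4) is given below. The stopping criterion `is_imf` is an assumed form, since the patent's exact inequality in g(t), m(t), T and Δt is not reproduced, and the tolerance value is illustrative.

```python
# Sketch of extremum-domain mean modal decomposition (steps (1)-(4)).
# The eigenmode criterion below is an assumed form, not the patent's exact inequality.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def local_mean_function(g: np.ndarray) -> np.ndarray:
    """Cubic-spline fit of the local means between adjacent extreme points (step (2))."""
    ext = np.sort(np.concatenate([argrelextrema(g, np.greater)[0],
                                  argrelextrema(g, np.less)[0]]))
    mids = (ext[:-1] + ext[1:]) / 2.0
    means = np.array([g[a:b + 1].mean() for a, b in zip(ext[:-1], ext[1:])])
    if len(mids) < 4:                       # too few local means for a cubic spline fit
        return np.full(len(g), g.mean())
    return CubicSpline(mids, means)(np.arange(len(g)))

def is_imf(g: np.ndarray, m: np.ndarray, tol: float = 0.1) -> bool:
    """Assumed stopping criterion for an eigenmode function (step (3))."""
    return float(np.sum((g - m) ** 2) / (np.sum(g ** 2) + 1e-12)) < tol

def decompose(y: np.ndarray, max_imfs: int = 10):
    """Decompose y(t) into eigenmode functions c_i(t) and a residual delta(t)."""
    imfs, g = [], y.copy()
    while len(imfs) < max_imfs:
        n_ext = (len(argrelextrema(g, np.greater)[0]) +
                 len(argrelextrema(g, np.less)[0]))
        if n_ext < 2:                       # step (4): fewer than 2 extrema -> residual
            break
        m = local_mean_function(g)          # step (2)
        h = g - m                           # error function
        if is_imf(h, m):                    # step (3): separate the eigenmode function
            imfs.append(h)
            g = m                           # remaining local mean becomes the new init
        else:
            g = h                           # error function becomes the new init
    return imfs, g                          # g is the residual function delta(t)
```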
Step 1.3: time-frequency conversion
(1) Apply the Hilbert transform to each eigenmode function to obtain the corresponding amplitude function and frequency function; the residual function is discarded in the transformation as a quantity independent of the features. The time-domain function of both the source domain and the target domain can then be expressed as

y(t) = Re Σ_{i=1}^{I} A_i(t) exp(j ∫ ω_i(t) dt)   (4)

where A_i(t) and ω_i(t) are the amplitude function and the frequency function of the i-th eigenmode function.
(2) The time-domain function is converted into a frequency-domain function by the Fourier transform:

Y(ω) = ∫ y(t) e^{-jωt} dt   (5)
and (4) and (5) are combined to obtain a time-frequency diagram of the source domain and the target domain, namely the image data.
Step two: build a ResNet-50 neural network training model to train on the source-domain image data. The process is divided into three steps: defining the residual function, constructing the network structure, and training on the image data.
Step 2.1: defining a residual function
To alleviate the degradation problem caused by increasing the number of layers in the network structure, several convolution layers of ResNet-50 form a residual block, and the model is trained through identity mappings. Residual blocks are divided into two classes, equal-dimension and dimension-reducing, and the residual function can be expressed as

F(x) = H(x) - x for an equal-dimension block, F(x) = H(x) - Wx for a dimension-reducing block,

where x is the input, W is a transformation matrix, H(·) is the operation function, and W_i is the weight of the i-th residual block. The training target of the model is then defined as F(x) → 0, so that the added layers reduce to an identity transformation.
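A sketch of the two residual block classes in PyTorch follows; it mirrors the standard ResNet-50 bottleneck design, and the channel widths and layer hyperparameters are illustrative rather than taken from the patent.

```python
# Sketch of an equal-dimension / dimension-reducing residual block (ResNet-50 style).
# Channel widths and strides are illustrative assumptions.
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_ch: int, mid_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # F(x): the residual branch of three convolutions
        self.branch = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        # Equal-dimension block: identity shortcut (H(x) = F(x) + x).
        # Dimension-reducing block: 1x1 convolution W on the shortcut (H(x) = F(x) + Wx).
        if stride == 1 and in_ch == out_ch:
            self.shortcut = nn.Identity()
        else:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.branch(x) + self.shortcut(x))   # F(x) -> 0 recovers identity
```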
Step 2.2: building network structure
(1) The first convolution layer performs convolution, regularization, activation functions, and max-pooling operations.
(2) The 2nd to 49th convolution layers form four residual block layers, denoted residual block layers I, II, III and IV. Each residual block layer comprises 1 dimension-reducing residual block and 2 to 5 equal-dimension residual blocks. After the residual block layers are trained, the model residual matrix can be expressed in terms of the residual functions and the activation function σ.
(3) The last convolution layer performs average pooling and a fully connected operation, converting the residual matrix into feature vectors and computing the feature vectors.
Step 2.3: image data training
The image data are taken as the input of the first convolution layer, the operations of the network structure are executed, and after multiple iterations the model parameters, namely the feature distribution and the weights, are output.
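A minimal source-domain training sketch with torchvision's ResNet-50 is given below; the loss function, optimizer, output dimension and hyperparameters are assumptions, since the patent does not specify them.

```python
# Sketch of step 2.3: train ResNet-50 on the source-domain images.
# Loss, optimizer and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

def train_source_model(train_loader, num_outputs: int,
                       epochs: int = 50, lr: float = 1e-3, device: str = "cpu"):
    model = models.resnet50(weights=None)                    # trained from scratch here
    model.fc = nn.Linear(model.fc.in_features, num_outputs)  # deformation output head
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()              # deformation prediction treated as regression
    model.train()
    for _ in range(epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
    return model   # its weights and feature distribution are migrated to the target domain
```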
Step three: take the first 80% of the target-domain image data as the training set and the remaining 20% as the test set. Different transfer learning strategies are adopted, and the model parameters obtained by training on the source domain are corrected with the training set. According to which parts of the trained network structure are retrained, the transfer learning strategies are divided into five types.
Strategy 1: all network layers are trained and the weights of every layer are corrected;
Strategy 2: the weights of the first convolution layer are kept unchanged, and the remaining network layers are trained and their weights corrected;
Strategy 3: the weights of the first convolution layer and residual block layer I are kept unchanged, and the remaining network layers are trained and their weights corrected;
Strategy 4: the weights of the first convolution layer and residual block layers I and II are kept unchanged, and the remaining network layers are trained and their weights corrected;
Strategy 5: only the last convolution layer is trained and its weights corrected, and the weights of the remaining network layers are kept unchanged.
The model parameters are corrected according to strategies 1-5 to obtain five corresponding trained models; each model then predicts the subsequent machining deformation, and comparison with the test set gives the accuracy of each of the five trained models.
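One way to realize strategies 1-5 on a torchvision ResNet-50 is to freeze the corresponding parameter groups before fine-tuning on the target-domain training set, as sketched below; mapping the patent's layer groups onto the torchvision attribute names (conv1/bn1, layer1-layer4, fc) is an assumption.

```python
# Sketch of the five correction strategies as parameter freezing on torchvision's
# ResNet-50. The mapping onto attribute names conv1/bn1, layer1..layer4, fc is assumed.
import torch.nn as nn

STRATEGY_FROZEN = {
    1: [],                                                  # train all network layers
    2: ["conv1", "bn1"],                                    # freeze first convolution layer
    3: ["conv1", "bn1", "layer1"],                          # + residual block layer I
    4: ["conv1", "bn1", "layer1", "layer2"],                # + residual block layers I and II
    5: ["conv1", "bn1", "layer1", "layer2",
        "layer3", "layer4"],                                # train only the last layer
}

def apply_strategy(model: nn.Module, strategy: int) -> nn.Module:
    frozen = STRATEGY_FROZEN[strategy]
    for name, param in model.named_parameters():
        param.requires_grad = not any(name.startswith(p) for p in frozen)
    return model

# Fine-tuning on the first 80% of the target-domain images reuses the same training
# loop as the source model; only the unfrozen weights are corrected.
```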
Step four: compare the accuracy of each group of trained models, screen out the models whose accuracy exceeds 95% after the iterations are completed, and calculate the fluctuation rate of the screened models. The fluctuation rate R is expressed in terms of Q_j and J, where Q_j is the accuracy of the j-th iteration and J is the total number of iterations.
The fluctuation rates of the screened models are compared; the model with the smallest fluctuation rate is the trained model with the best accuracy and robustness, and its transfer learning strategy is the optimal one. The optimal transfer learning strategy is then used to predict the machining deformation of thin-walled parts of the same material type.
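A sketch of this screening step is shown below; because the fluctuation-rate formula is not reproduced in the text, the normalized mean absolute deviation of the per-iteration accuracies is assumed as one plausible form.

```python
# Sketch of step four: screen models above 95% accuracy and pick the one with the
# smallest fluctuation rate. The formula for R is an assumed form (normalized mean
# absolute deviation of the per-iteration accuracies Q_j), not the patent's exact one.
import numpy as np

def fluctuation_rate(accuracies) -> float:
    q = np.asarray(accuracies, dtype=float)       # Q_1, ..., Q_J
    return float(np.mean(np.abs(q - q.mean())) / q.mean())

def select_best_strategy(accuracy_histories: dict, threshold: float = 0.95) -> int:
    """accuracy_histories maps strategy id -> [Q_1, ..., Q_J] collected during fine-tuning."""
    candidates = {s: q for s, q in accuracy_histories.items() if q[-1] > threshold}
    return min(candidates, key=lambda s: fluctuation_rate(candidates[s]))
```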
The beneficial effects are as follows:
1. The invention discloses a thin-walled part machining deformation prediction method based on transfer learning. Given that the offline stage of a thin-walled part is long and monitoring data are difficult to acquire on a large scale, a trained model with high accuracy and robustness is constructed by transfer learning, so that the machining deformation of the thin-walled part can be predicted quickly and accurately under small-sample conditions.
2. The invention discloses a thin-walled part machining deformation prediction method based on transfer learning. On the basis of beneficial effect 1, the mapping relation between two segments of small-sample data is constructed through multi-strategy model parameter correction, providing a theoretical basis for exploring the machining deformation mechanism of thin-walled parts.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a view of the thin-walled part of the present invention; panel (a) is a detail drawing; panel (b) is a top view;
FIG. 3 shows the measurement positions on the thin-walled part of the present invention;
FIG. 4 is a Gantt chart of sample data of source domain and target domain of the present invention;
FIG. 5 is a time-frequency processing diagram of the present invention;
FIG. 6 is a flowchart of the ResNet-50 neural network algorithm of the present invention;
FIG. 7 is a graph of model iteration accuracy for various strategy modes of the present invention; FIG. (a) is the model iteration accuracy of M1; graph (b) is model iteration accuracy of M2; graph (c) is the model iteration accuracy of M3; graph (d) is the model iteration accuracy of M4; graph (e) is model iteration accuracy of M5; graph (f) is the model iteration accuracy of M6;
FIG. 8 is a graph of model final accuracy for various strategy modes of the present invention; FIG. (a) is the final accuracy of the model for M1; graph (b) is the model final accuracy of M2; graph (c) is the model final accuracy of M3; graph (d) is the model final accuracy of M4; graph (e) is the model final accuracy of M5; graph (f) is the model final accuracy of M6;
fig. 9 is a graph of model iteration accuracy for M7 of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples in order to better explain the problem addressed by the invention.
As shown in FIG. 1, this example relates to a thin-walled workpiece machining deformation prediction method based on transfer learning. First, two segments of collected small-sample machining deformation data are selected as the source domain and the target domain of the model, and the source-domain and target-domain data are converted into image data; then a model is trained on the source-domain image data with a ResNet-50 neural network, constructing a feature function and training weights; next, the model parameters obtained by training on the source domain are transferred to the target domain, and multi-strategy parameter correction is performed on the target-domain image data to obtain several groups of trained models; finally, the trained model with the best accuracy and robustness is screened out, realizing efficient and accurate prediction of the machining deformation.
The experimental object used in this example is a typical thin-walled part of an aerospace engine, and the dimensional parameters are shown in fig. 2.
Step one: following the measurement positions shown in FIG. 3, the deformation at each point is acquired over the 72 h after the part goes offline, giving data sets M1 to M6. The measurement duration of M1-M6 is divided equally into 12 segments, two small-sample segments are selected at random as the source domain and the target domain, and a Gantt chart of the selected samples is shown in FIG. 4.
The source-domain and target-domain data are standardized, and the processed data are denoted X_S1~X_S6 and X_T1~X_T6 respectively. Extremum-domain mean modal decomposition and time-frequency transformation are then applied to X_S1~X_S6 and X_T1~X_T6 to obtain the source-domain and target-domain image data Y_S1~Y_S6 and Y_T1~Y_T6; the decomposition and time-frequency transformation are illustrated in FIG. 5.
Step two: the image data Y_S1~Y_S6 are trained according to the ResNet-50 neural network algorithm flow shown in FIG. 6, and the model parameters are output.
Step three: from the image data Y_T1~Y_T6, the first 80% of the data are selected as the training set and the remaining 20% as the test set. Following transfer learning strategies 1-5, the model parameters are corrected with the training set, the subsequent machining deformation is predicted, and the accuracy over the model iterations is obtained by comparison with the test set, as shown in FIG. 7.
As can be seen from FIG. 7, the fluctuation patterns of the models trained with the same strategy are highly similar across points M1 to M6, indicating that the main factor affecting the model fluctuation rate is the transfer learning strategy. The accuracy of each model at the 50th iteration is taken as its final accuracy, as shown in FIG. 8.
As can be seen from FIG. 8, strategies 1 and 2 have the highest accuracy, both exceeding 95%; as fewer layers are trained, the accuracy of strategies 3-5 gradually decreases, but the degree of decrease differs from point to point. At points M1 and M3 the time interval between the source domain and the target domain is very long; the model is highly accurate when corrected by strategies 1 and 2, drops sharply from strategy 3 to strategy 4, and levels off by strategy 5. This indicates that the deformation mechanism of the part changes after a long time interval, the feature distribution functions differ markedly, and the weights of multiple network layers need to be corrected.
However, the time interval between the source and target domains at point M5 is not as long, yet its accuracy trend is very similar to that of points M1 and M3, indicating that the deformation mechanism of the part does not change gradually but changes abruptly within a short time. The sample period selected at point M6 is very close to that of point M5, yet the model accuracies differ greatly, which indicates that the abrupt-change stage of the part deformation mechanism can essentially be located within 12-18 h after the part goes offline.
After this change stage, the previous trained model is no longer applicable, and the weight parameters must be corrected to update the feature distribution and restore the accuracy of the model. Examining points M1, M3 and M5 under the different strategies, the model accuracy of strategies 3 and 4 drops sharply, which shows that residual block layers I and II play an important role in the model accuracy.
The sample periods selected at points M2 and M4 are 12-18 h after the part goes offline, yet both show high model accuracy under the different strategies, indicating that the deformation mechanism of the part after 18 h offline is similar.
Step four: the models with accuracy exceeding 95% are selected, namely strategies 1-2 for point M1, strategies 1-5 for point M2, strategies 1-2 for point M3, strategies 1-5 for point M4, strategies 1-2 for point M5, and strategies 1-3 for point M6. The fluctuation rates of these models are calculated from equation (8), as shown in Table 1.
Table 1: Model fluctuation rates
The models with the smallest fluctuation rate at points M1 to M6 are, respectively, the strategy 2 model for M1, the strategy 4 model for M2, the strategy 2 model for M3, the strategy 4 model for M4, the strategy 2 model for M5, and the strategy 3 model for M6.
In summary, if the data acquired before 12 h offline are taken as the source domain, the strategy 2 transfer learning model has the best accuracy and robustness; if the data acquired between 12 and 18 h offline are taken as the source domain, the strategy 3 transfer learning model has the best accuracy and robustness; and if the data acquired after 18 h offline are taken as the source domain, the strategy 4 transfer learning model has the best accuracy and robustness.
To verify this conclusion, a machining experiment is carried out on a thin-walled part of the same material type; an arbitrary point M7 on the part is selected, its machining deformation is measured during 0-6 h and 12-18 h offline, and the two measured segments are used as the source-domain and target-domain data respectively. Based on the time range of the source-domain data, transfer learning strategy 2 is used to predict the machining deformation of the part, giving the model iteration accuracy for M7 shown in FIG. 9.
As seen in FIG. 9, the final accuracy of the model reaches 97.41%, meeting the 95% accuracy requirement. Therefore, by selecting the transfer learning strategy according to the above conclusion, accurate and efficient prediction of the machining deformation of a thin-walled part can be achieved with small-sample data.
The above example shows only one embodiment of the invention, in which the thin-walled part measures 40×20×5 mm with a wall thickness of only 1 mm and is made of TC4 titanium alloy; the method remains applicable to other thin-walled parts. All modifications made within the framework of the inventive concept fall within the scope of protection of the invention.

Claims (3)

1. A thin-walled workpiece processing deformation prediction method based on transfer learning, characterized by comprising the following steps:
step one, selecting two segments of small-sample data from actually collected offline time-series deformation data of a thin-walled part, dividing the data into a source domain and a target domain according to their time order, and converting the source-domain and target-domain data into image data to obtain source-domain image data and target-domain image data;
step two, building a ResNet-50 neural network training model, training it on the source-domain image data, and obtaining model parameters, the model parameters comprising the feature distribution and the weights;
step three, taking the first 80% of the target-domain image data as a training set and the remaining 20% as a test set; adopting different transfer learning strategies and correcting the model parameters obtained in step two using the training set;
the transfer learning strategies are divided into five types;
Strategy 1: all network layers are trained and the weights of every layer are corrected;
Strategy 2: the weights of the first convolution layer are kept unchanged, and the remaining network layers are trained and their weights corrected;
Strategy 3: the weights of the first convolution layer and residual block layer I are kept unchanged, and the remaining network layers are trained and their weights corrected;
Strategy 4: the weights of the first convolution layer and residual block layers I and II are kept unchanged, and the remaining network layers are trained and their weights corrected;
Strategy 5: only the last convolution layer is trained and its weights corrected, and the weights of the remaining network layers are kept unchanged;
correcting the model parameters according to strategies 1-5 to obtain five corresponding trained models, predicting the subsequent machining deformation with each, and comparing with the test set to obtain the accuracy of each of the five trained models;
step four: comparing the accuracy of each group of trained models, screening out the models whose accuracy exceeds 95% after the iterations are completed, and calculating the fluctuation rate of the screened models, the fluctuation rate R being expressed in terms of Q_j and J, where Q_j is the accuracy of the j-th iteration and J is the total number of iterations;
comparing the fluctuation rates of the screened models, wherein the model with the smallest fluctuation rate is the trained model with the best accuracy and robustness and its transfer learning strategy is the optimal one; and predicting the machining deformation of thin-walled parts of the same material type by adopting the optimal transfer learning strategy.
2. The thin-walled workpiece processing deformation prediction method based on transfer learning according to claim 1, characterized in that the source-domain and target-domain data are converted into image data in step one as follows:
step 1.1: normalization process
the source-domain and target-domain data {x_1, ···, x_i, ···, x_n} are standardized to obtain y(t), the standardized source-domain and target-domain data set; y(t) is taken as the initialization function;
step 1.2: polar region mean modal decomposition
(1) Finding all extrema of the initialization function to form the extremum set of the function;
(2) Calculating the local mean of each pair of adjacent extreme points according to the integral mean value theorem, fitting the local mean function by cubic spline interpolation, and defining the difference between the initialization function and the local mean function as the error function;
(3) Determining whether the error function is an eigenmode function; if it is, separating the eigenmode function from the initialization function, taking the remaining local mean function as the new initialization function, and executing step (4); if it is not, taking the error function as the new initialization function and jumping back to step (1);
the basis for the determination of the eigenmode function is expressed as:
wherein g (T) is an initializing function, m (T) is a local mean function, T is a time sequence total length, and Deltat is a time sequence interval;
(4) Finding out the extreme point of the updated initialization function; if the number of the extreme points is not less than 2, jumping back to the step (1); if the number of the extreme points is less than 2, taking the updated initialization function as a residual function;
steps (1) to (4) are completed, where y (t) is decomposed into a plurality of eigenmode functions and a residual function:
in c i (t) is the ith eigenmode function of decomposition, delta (t) is a residual function, and I is the number of eigenmode functions;
step 1.3: time-frequency conversion
(1) Carrying out the Hilbert transform on each eigenmode function to obtain the corresponding amplitude function and frequency function, the residual function being discarded in the transformation as a quantity independent of the features; the time-domain function of both the source domain and the target domain is thus expressed as
y(t) = Re Σ_{i=1}^{I} A_i(t) exp(j ∫ ω_i(t) dt)   (4)
where A_i(t) and ω_i(t) are the amplitude function and the frequency function of the i-th eigenmode function;
(2) Converting the time-domain function into a frequency-domain function by the Fourier transform, Y(ω) = ∫ y(t) e^{-jωt} dt   (5);
equations (4) and (5) are combined to obtain the time-frequency diagram of the source domain and the target domain, namely the image data.
3. The thin-walled workpiece processing deformation prediction method based on transfer learning according to claim 2, characterized in that step two is implemented as follows:
step 2.1: defining a residual function
a plurality of convolution layers of ResNet-50 form a residual block, and the model is trained through identity mappings; residual blocks are divided into two classes, equal-dimension and dimension-reducing, and the residual function is expressed as
F(x) = H(x) - x for an equal-dimension block, F(x) = H(x) - Wx for a dimension-reducing block,
where x is the input, W is a transformation matrix, H(·) is the operation function, and W_i is the weight of the i-th residual block; the training target of the model is then defined as F(x) → 0, so that the added layers reduce to an identity transformation;
step 2.2: building network structure
(1) The first convolution layer performs convolution, regularization, activation functions and max-pooling operations;
(2) The 2nd to 49th convolution layers form four residual block layers, denoted residual block layers I, II, III and IV; each residual block layer comprises 1 dimension-reducing residual block and 2 to 5 equal-dimension residual blocks, and after the residual block layers are trained, the model residual matrix is expressed in terms of the residual functions and the activation function σ;
(3) The last convolution layer executes the operations of average pooling and full connection, converts the residual matrix into feature vectors, and calculates the feature vectors;
step 2.3: image data training
taking the image data as the input of the first convolution layer, executing the operations of the network structure, and finally outputting the model parameters, namely the feature distribution and the weights, through multiple iterations.

