CN116992954A - UMAP data dimension reduction-based similarity measurement transfer learning method - Google Patents

Publication number: CN116992954A (legal status: pending)
Application number: CN202311246391.0A
Other languages: Chinese (zh)
Inventors: 赵正彩, 张创, 张磊, 李尧, 徐九华
Assignee: Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics

Classifications

    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2131 Feature extraction based on a transform domain processing, e.g. wavelet transform
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06N3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/048 Activation functions
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/096 Transfer learning


Abstract

The application discloses a similarity measurement transfer learning method based on UMAP data dimension reduction, which comprises the following steps: collecting machining signals under four different working conditions; collecting the tool wear amount under the source working condition; extracting time-domain features, frequency-domain features and wavelet-decomposed time-frequency-domain features, and performing dimension reduction on the extracted features; the dimension-reduced feature vectors of the source working condition and the corresponding tool wear amounts form a labeled source domain; the dimension-reduced feature vectors of the three variable working conditions form an unlabeled target domain; the unlabeled target domain is divided into two parts, one of which, together with the labeled source domain, forms a training data set that is imported into a similarity transfer learning model for training; the other part is imported into the trained tool wear prediction model, and the tool wear amount is obtained by prediction. The application adopts a transfer learning method and constructs a new loss function to evaluate the distribution difference between the target domain and the source domain, thereby realizing reuse of the prediction model under new process conditions.

Description

UMAP data dimension reduction-based similarity measurement transfer learning method
Technical Field
The application relates to the technical field of machining, in particular to a UMAP data dimension reduction-based similarity measurement transfer learning method.
Background
Nickel-based superalloys have good oxidation resistance, corrosion resistance and creep resistance as well as excellent fatigue life at high temperature, and are therefore suitable as integral turbine disk materials. However, these good physical and mechanical properties imply poor machinability: during cutting of nickel-based superalloys the cutting forces are large, the machined surface quality is low and tool wear is rapid. If the tool reaches its dulling criterion during machining, the machining accuracy of the whole turbine disk becomes insufficient and the part may even be scrapped.
To prevent the tool from reaching the dulling criterion during machining, the traditional approach is to stop the machine tool and change the tool early, before the end of its life. This causes serious tool waste: on average only 50%-80% of the tool life is actually used.
Monitoring the tool wear state by optical imaging or manual inspection requires stopping the machine tool, which runs counter to the automated, unmanned concept of intelligent manufacturing and consumes considerable manpower and time. The emergence of artificial intelligence has promoted the development of real-time tool wear monitoring, and tool monitoring models built with artificial neural networks, support vector machines and hidden Markov models are gradually being applied in actual machining.
However, under new process conditions the accuracy of a prediction model built on data from the original process condition drops and the model fails, while retraining the prediction model lacks sufficient labeled samples.
Disclosure of Invention
The application aims to provide a similarity measurement transfer learning method based on UMAP (Uniform Manifold Approximation and Projection) data dimension reduction, which adopts transfer learning and constructs a new loss function to evaluate the distribution difference between a target domain and a source domain, thereby realizing reuse of the prediction model under new process conditions.
In order to achieve the technical purpose, the application adopts the following technical scheme:
A similarity measurement transfer learning method based on UMAP data dimension reduction, comprising the following steps:
S1, collecting machining process signals, including milling current and vibration signals, under four different working conditions; the four working conditions comprise a source working condition and three variable working conditions; acquiring, through a three-dimensional video microscope, the tool wear amounts corresponding to the different machining process signals under the source working condition;
S2, performing data cleaning on the machining signals of the four working conditions, extracting time-domain features, frequency-domain features and wavelet-decomposed time-frequency-domain features from the cleaned signals, and then performing dimension reduction on the extracted features; the dimension-reduced feature vectors of the source working condition and the corresponding tool wear amounts form a labeled source domain; the dimension-reduced feature vectors of the three variable working conditions form an unlabeled target domain;
S3, dividing the unlabeled target domain into two parts; one part, together with the labeled source domain, forms a training data set, which is imported into the similarity transfer learning model for training to obtain a trained tool wear prediction model; the other part is imported into the trained tool wear prediction model, and the tool wear amount is obtained by prediction.
Further, in step S1, the three variable working conditions include a variable cutting speed working condition, a variable feed amount working condition and a variable cutting depth working condition.
Further, in step S2, let the tool wear amounts collected under the source working condition be VB_i (i = 1, ..., n); invalid values and abnormal values in the machining signals are cleaned; time-domain features, frequency-domain features and wavelet-decomposed time-frequency-domain features are extracted from the cleaned signals, the m features extracted from the i-th machining signal forming the feature vector x_i = [x_i1, x_i2, ..., x_im]; the n samples form the feature vector set X = {x_1, x_2, ..., x_n}; dimension reduction is performed on X to obtain the feature vector set Y = {y_1, y_2, ..., y_n}.
The dimension reduction process specifically comprises the following steps:
S21, designing a conditional probability function p(j|i) to describe the pairwise distribution relationship between the high-dimensional sample points:
p(j|i) = exp( -( d(x_i, x_j) - ρ_i ) / σ_i )
where ρ_i is the distance from the feature vector x_i to its first nearest-neighbor feature vector, σ_i is the diameter of the nearest-neighbor neighborhood of x_i, and d(x_i, x_j) represents the Euclidean distance between the feature vectors x_i and x_j;
S22, constructing a conditional probability function q_ij to describe the distribution relationship between the low-dimensional sample points:
q_ij = ( 1 + a·||y_i - y_j||^(2b) )^(-1)
where a and b are both hyperparameters; by adjusting a and b, the gathering degree of the mapped low-dimensional data is adjusted;
S23, symmetrizing the conditional probability function p(j|i):
p_ij = p(j|i) + p(i|j) - p(j|i)·p(i|j)
where p(j|i) and p(i|j) are the conditional occurrence probabilities given i and given j, respectively;
S24, constructing the binary cross-entropy loss function CE and solving for its minimum by gradient descent, so that the relationships between the high-dimensional sample points and those between the low-dimensional sample points become as similar as possible, yielding the dimension-reduced feature vector set Y, where the dimension of each vector y is less than m; the binary cross-entropy loss function CE is:
CE = Σ_{i≠j} [ p_ij·ln( p_ij / q_ij ) + (1 - p_ij)·ln( (1 - p_ij) / (1 - q_ij) ) ]
Further, in step S2, the dimension-reduced feature vectors of the source working condition and the corresponding tool wear amounts form the labeled source domain D_s = { (y_i^s, VB_i^s) }, i = 1, ..., n_s; the dimension-reduced feature vectors of the three variable working conditions respectively form the unlabeled target domains D_t1 = { y_i^t1 }, D_t2 = { y_i^t2 } and D_t3 = { y_i^t3 },
where y_i^s and VB_i^s are respectively the dimension-reduced feature vector corresponding to the i-th machining signal under the source working condition and the corresponding tool wear amount; n_s is the total number of samples under the source working condition; y_i^t1, y_i^t2 and y_i^t3 are the dimension-reduced feature vectors corresponding to the i-th machining signal under the three variable working conditions, respectively; and n_t1, n_t2 and n_t3 are the total numbers of samples under the three variable working conditions, respectively.
Further, in step S3, the similarity transfer learning model is constructed based on a variant LSTM model whose gate layers also receive the cell state as input.
The forget gate of the variant LSTM model is calculated as:
f_t = σ( W_f·[C_{t-1}, h_{t-1}, x_t] + b_f )
The addition gate of the variant LSTM model is calculated as:
i_t = σ( W_i·[C_{t-1}, h_{t-1}, x_t] + b_i )
The output gate of the variant LSTM model is calculated as:
o_t = σ( W_o·[C_t, h_{t-1}, x_t] + b_o )
The forward propagation process of the variant LSTM model is as follows:
calculate the candidate cell state C̃_t = tanh( W_c·[h_{t-1}, x_t] + b_c );
calculate the cell state C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t;
calculate the output h_t = o_t ⊙ tanh( C_t );
where σ is the sigmoid function, h_{t-1} denotes the output at time t-1, x_t denotes the input at time t, and h_t denotes the output at time t; W_f and b_f represent the parameters required for forget-gate training, W_i and b_i the parameters required for addition-gate training, W_o and b_o the parameters required for output-gate training, and W_c and b_c the parameters required for calculating C̃_t; C_t is the current cell state and C_{t-1} is the cell state at the previous moment; f_t, i_t and o_t respectively denote the outputs of the forget gate, the addition gate and the output gate at time t; and C̃_t denotes the candidate value of the cell state at the current time step t.
Further, in step S3, the loss function of the similarity transfer learning model is:
L = L_reg + λ1·L_CORAL + λ2·L_sim
where L_reg = (1/n_s)·Σ_{i=1}^{n_s} ( VB_i - V̂B_i )² is the tool wear prediction loss on the source domain data, VB_i and V̂B_i being the measured value and the predicted value of the wear of the i-th tool sample under the source working condition; L_CORAL = ( 1/(4d²) )·||C_S - C_T||_F² is the CORAL loss between the source domain and target domain data features, d being the feature vector dimension, C_S and C_T respectively the feature covariance matrices of the source domain and the target domain, and ||·||_F the Frobenius norm; L_sim is the similarity measure between the source domain and the target domain, defined over the prediction output ŷ of the similarity transfer learning model, the dimension-reduced feature vector y, the expected prediction output, the occurrence z of the k events, the regularization parameter, the data type and the data distribution D, and minimizing the intra-class distance of the features; λ1 and λ2 are the weighting coefficients of the domain adaptation and of the similarity measure, respectively.
Further, in step S3, the loss function L is made to converge to its optimum through repeated iterative feedback of the Adam optimization algorithm, yielding the trained tool wear prediction model.
The iteration formulas of the t-th iteration are:
m_t = β1·m_{t-1} + (1 - β1)·g_t
v_t = β2·v_{t-1} + (1 - β2)·g_t²
m̂_t = m_t / (1 - β1^t)
v̂_t = v_t / (1 - β2^t)
w_{t+1} = w_t - η·m̂_t / ( √v̂_t + ε )
where m_t denotes the first-order momentum term; β1 denotes the exponential decay rate of the first-moment estimate, taking the value 0.9; m̂_t denotes the first-order momentum correction value; v_t denotes the second-order momentum term; β2 denotes the exponential decay rate of the second-moment estimate, taking the value 0.999; v̂_t denotes the second-order momentum correction value; β1^t and β2^t denote the t-th powers of β1 and β2; w_t denotes the weights of the t-th iteration and g_t the gradient value of the t-th iteration; η denotes the learning rate; and ε is a hyperparameter that prevents the denominator from being zero.
Further, in step S3, the unlabeled target domain is divided into two parts in a ratio of 80% to 20%, where the 80% part of the unlabeled target domain, together with the labeled source domain, forms the training data set.
Compared with the prior art, the application has the following beneficial effects:
according to the UMAP data dimension reduction-based similarity measurement transfer learning method, a new loss function is constructed by adopting the transfer learning method to evaluate the distribution difference between a target domain and a source domain, so that multiplexing of a prediction model under new process conditions is realized.
Drawings
FIG. 1 is a parameter diagram of four conditions;
FIG. 2 is a signal data acquisition flow chart;
FIG. 3 is a UMAP data dimension reduction visualization;
FIG. 4 is a diagram of a variant LSTM structure;
FIG. 5 is a flowchart of the Adam algorithm;
fig. 6 is a flowchart of a similarity measurement transfer learning method based on UMAP data dimension reduction in the present application.
Detailed Description
Embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 6, the application discloses a similarity measurement transfer learning method based on UMAP data dimension reduction, which comprises the following steps:
S1, collecting machining process signals, including milling current and vibration signals, under four different working conditions; the four working conditions comprise a source working condition and three variable working conditions; acquiring, through a three-dimensional video microscope, the tool wear amounts corresponding to the different machining process signals under the source working condition;
S2, performing data cleaning on the machining signals of the four working conditions, extracting time-domain features, frequency-domain features and wavelet-decomposed time-frequency-domain features from the cleaned signals, and then performing dimension reduction on the extracted features; the dimension-reduced feature vectors of the source working condition and the corresponding tool wear amounts form a labeled source domain; the dimension-reduced feature vectors of the three variable working conditions form an unlabeled target domain;
S3, dividing the unlabeled target domain into two parts; one part, together with the labeled source domain, forms a training data set, which is imported into the similarity transfer learning model for training to obtain a trained tool wear prediction model; the other part is imported into the trained tool wear prediction model, and the tool wear amount is obtained by prediction.
The UMAP data dimension reduction-based similarity measurement transfer learning method comprises three parts: data acquisition and preprocessing, model training, and wear prediction. The data acquisition and preprocessing part collects the milling current, vibration signals and corresponding tool wear amounts under the four working conditions, performs data cleaning and feature extraction on the machining signals of the four working conditions, reduces the dimension of the data features with UMAP, and constructs the labeled source domain and unlabeled target domain data sets; 20% of the unlabeled target domain data set is set aside for tool wear prediction, and the remainder, together with the labeled source domain, is sent as the training data set to the model training module. The model training part constructs the variant LSTM model, initializes its parameters, and iterates the loss function L with the Adam optimizer until it converges to the optimum, yielding the trained model. The wear prediction part takes the 20% of data set aside from the unlabeled target domain as input and obtains the tool wear amount.
In the data acquisition and preprocessing module, the current and vibration signals under the different milling working conditions are acquired through the machine tool communication protocol and sensors; the acquisition scheme is shown in fig. 2, and fig. 1 is a schematic diagram of the acquisition parameters. The four acquisition working conditions are the source working condition, variable cutting speed, variable feed amount and variable cutting depth, and the acquired signals are the machining current and vibration signals. The corresponding tool wear amounts under working condition A (the source working condition) are acquired through a three-dimensional video microscope and recorded as VB_i; in this embodiment, the tool wear amount acquired by the three-dimensional video microscope is the flank wear value VB. Invalid and abnormal values in the machining signals are then cleaned; specifically, the smooth, steady cutting segments are screened out of the original machining signals, thereby processing the raw signals and removing useless data. Time-domain features, frequency-domain features and wavelet-decomposed time-frequency-domain features are extracted from the cleaned signals, the m extracted features forming the feature vector x_i; the n samples constitute the set X, and the UMAP dimension-reduction step then reduces X to obtain Y.
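The feature-extraction step described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the specific feature set (mean, RMS, peak, spectral centroid, spectral energy) is an assumption, and a simple Haar-style pyramid stands in for a full wavelet-decomposition toolbox.

```python
import numpy as np

def extract_features(signal, fs=10_000, wavelet_levels=3):
    """Build one feature vector from a cleaned machining-signal segment:
    time-domain, frequency-domain, and wavelet time-frequency features."""
    feats = []
    # Time domain: mean, RMS, peak
    feats += [signal.mean(), np.sqrt(np.mean(signal**2)), np.abs(signal).max()]
    # Frequency domain: spectral centroid and total spectral energy
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats += [(freqs * spec).sum() / spec.sum(), (spec**2).sum()]
    # Time-frequency domain: energy per decomposition band
    # (a Haar-style pyramid stands in for a real wavelet transform here)
    approx = signal.astype(float)
    for _ in range(wavelet_levels):
        even, odd = approx[0::2], approx[1::2]
        n = min(len(even), len(odd))
        detail = (even[:n] - odd[:n]) / np.sqrt(2)
        approx = (even[:n] + odd[:n]) / np.sqrt(2)
        feats.append((detail**2).sum())  # band energy
    return np.array(feats)
```

With three wavelet levels this yields an 8-dimensional feature vector per signal segment; the n segments stacked together form the set X that is passed to UMAP.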
The specific steps of dimension reduction are as follows:
First, a conditional probability function p(j|i) is designed to describe the pairwise distribution relationship between the high-dimensional sample points:
p(j|i) = exp( -( d(x_i, x_j) - ρ_i ) / σ_i )
where ρ_i is the distance from the point x_i to its first nearest-neighbor data point, σ_i is the diameter of the nearest-neighbor neighborhood of x_i, and d(x_i, x_j) represents the Euclidean distance between the two vectors x_i and x_j.
Since p(j|i) is not a symmetric function, it needs to be symmetrized:
p_ij = p(j|i) + p(i|j) - p(j|i)·p(i|j)
Secondly, a conditional probability function q_ij is constructed to describe the distribution relationship between the low-dimensional sample points:
q_ij = ( 1 + a·||y_i - y_j||^(2b) )^(-1)
where a and b are hyperparameters; by adjusting a and b, the gathering degree of the mapped low-dimensional data can be adjusted. In this embodiment, a = 2.71 and b = 0.66 are obtained.
Finally, the binary cross-entropy loss function CE is constructed and driven to its minimum by gradient descent, so that the relationships between the high-dimensional sample points and those between the low-dimensional sample points are as similar as possible, finally yielding the dimension-reduced set Y, where the dimension of each vector y is less than m. The formula of the binary cross-entropy loss function is:
CE = Σ_{i≠j} [ p_ij·ln( p_ij / q_ij ) + (1 - p_ij)·ln( (1 - p_ij) / (1 - q_ij) ) ]
Fig. 3 shows the visualization of the UMAP dimension-reduction result for the source working-condition machining data.
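The quantities in the dimension-reduction steps can be expressed compactly in code. The sketch below computes p(j|i), the symmetrized p_ij, the low-dimensional q_ij and the binary cross entropy for a toy data set; the bandwidth handling is simplified (a fixed σ rather than UMAP's per-point search for a target neighborhood size), so treat it as illustrative only, not as a working UMAP implementation.

```python
import numpy as np

def high_dim_probs(X, sigma=1.0):
    """Symmetrized high-dimensional probabilities:
    p(j|i) = exp(-(d(x_i, x_j) - rho_i) / sigma), with rho_i the distance
    from x_i to its nearest neighbor; then p_ij = p_ji|i + p_i|j - product."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_off = d.copy()
    np.fill_diagonal(d_off, np.inf)          # ignore self-distances
    rho = d_off.min(axis=1, keepdims=True)   # nearest-neighbor distance
    p = np.exp(-np.maximum(d - rho, 0.0) / sigma)
    np.fill_diagonal(p, 0.0)
    return p + p.T - p * p.T                 # symmetrization

def low_dim_probs(Y, a=2.71, b=0.66):
    """q_ij = (1 + a * ||y_i - y_j||^(2b))^-1 with hyperparameters a, b."""
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    q = 1.0 / (1.0 + a * d ** (2 * b))
    np.fill_diagonal(q, 0.0)
    return q

def binary_cross_entropy(p, q, eps=1e-12):
    """CE = sum over i != j of p*ln(p/q) + (1-p)*ln((1-p)/(1-q))."""
    mask = ~np.eye(len(p), dtype=bool)
    p = np.clip(p[mask], eps, 1 - eps)
    q = np.clip(q[mask], eps, 1 - eps)
    return np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))
```

In UMAP proper, CE is minimized by gradient descent over the low-dimensional coordinates Y, pulling neighboring points together and pushing non-neighbors apart.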
The dimension-reduced feature vectors of the source working condition and the corresponding tool wear amounts form the labeled source domain D_s = { (y_i^s, VB_i^s) }; the dimension-reduced feature vectors of the three variable working conditions form the unlabeled target domains D_t1, D_t2 and D_t3, respectively.
The model training module sets aside the divided test data set from the unlabeled target domain; the remainder, together with the labeled source domain, forms the training data set, which is sent to the similarity transfer learning model for training. The learning model is a variant LSTM model whose structure is shown in fig. 4. Compared with the traditional LSTM model, the variant connects the cell state to the gate layers, allowing the gate layers to also receive the cell state as input; specifically, the cell state at the previous moment, C_{t-1}, also enters the forget gate and the addition gate, and the cell state participates in the operation of the output gate.
First is the forget gate, calculated as:
f_t = σ( W_f·[C_{t-1}, h_{t-1}, x_t] + b_f )
Next is the addition gate, calculated as:
i_t = σ( W_i·[C_{t-1}, h_{t-1}, x_t] + b_i )
Finally is the output gate, calculated as:
o_t = σ( W_o·[C_t, h_{t-1}, x_t] + b_o )
The forward propagation process is as follows:
The first step: calculate the candidate cell state C̃_t = tanh( W_c·[h_{t-1}, x_t] + b_c );
The second step: calculate the cell state C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t;
The third step: calculate the output h_t = o_t ⊙ tanh( C_t );
where σ is the sigmoid function, h_{t-1} denotes the output at time t-1, x_t denotes the input at time t, and h_t denotes the output at time t; W_f and b_f represent the parameters required for forget-gate training, W_i and b_i the parameters required for addition-gate training, W_o and b_o the parameters required for output-gate training, and W_c and b_c the parameters required for calculating C̃_t; C_t is the current cell state and C_{t-1} is the cell state at the previous moment; f_t, i_t and o_t respectively denote the outputs of the forget gate, the addition gate and the output gate at time t; and C̃_t denotes the candidate value of the cell state at the current time step t.
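A single forward step of such a variant cell, with the cell state concatenated into the gate inputs, might look as follows. The layer sizes, random weight initialization and exact gate/state wiring are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class VariantLSTMCell:
    """LSTM cell whose gates also receive the cell state:
    gate inputs are [C, h_{t-1}, x_t] instead of only [h_{t-1}, x_t]."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        g = input_size + 2 * hidden_size   # [C, h, x] for the gates
        c = input_size + hidden_size       # [h, x] for the candidate
        self.W_f, self.b_f = rng.normal(0, 0.1, (hidden_size, g)), np.zeros(hidden_size)
        self.W_i, self.b_i = rng.normal(0, 0.1, (hidden_size, g)), np.zeros(hidden_size)
        self.W_o, self.b_o = rng.normal(0, 0.1, (hidden_size, g)), np.zeros(hidden_size)
        self.W_c, self.b_c = rng.normal(0, 0.1, (hidden_size, c)), np.zeros(hidden_size)

    def step(self, x_t, h_prev, c_prev):
        zc = np.concatenate([c_prev, h_prev, x_t])
        f_t = sigmoid(self.W_f @ zc + self.b_f)   # forget gate sees C_{t-1}
        i_t = sigmoid(self.W_i @ zc + self.b_i)   # addition gate sees C_{t-1}
        c_hat = np.tanh(self.W_c @ np.concatenate([h_prev, x_t]) + self.b_c)
        c_t = f_t * c_prev + i_t * c_hat          # new cell state
        zo = np.concatenate([c_t, h_prev, x_t])
        o_t = sigmoid(self.W_o @ zo + self.b_o)   # output gate sees C_t
        h_t = o_t * np.tanh(c_t)                  # new hidden output
        return h_t, c_t
```

Running the cell over a dimension-reduced feature sequence and attaching a small regression head on the final h_t would yield the wear prediction; that head is omitted here for brevity.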
In order to realize tool wear prediction under multiple working conditions, the distribution difference between the source domain and the target domain is reduced through marginal distribution adaptation and the similarity between them is increased, thereby improving the prediction effect on the target domain; the loss function is therefore rewritten in three parts.
Taking the transfer from the source working condition to one of the variable working conditions as an example:
1) Tool wear prediction loss on the source domain data:
L_reg = (1/n_s)·Σ_{i=1}^{n_s} ( VB_i - V̂B_i )²
where VB_i and V̂B_i are the measured value and the predicted value of the wear of the i-th tool sample under the source working condition.
2) CORAL loss between the source domain and target domain data features:
L_CORAL = ( 1/(4d²) )·||C_S - C_T||_F²
where C_S and C_T are respectively the feature covariance matrices of the source domain and the target domain, d is the feature vector dimension, and ||·||_F denotes the Frobenius norm.
3) Similarity measure between the source domain and the target domain.
Let ŷ be the prediction output of the model; the similarity between the central feature and the input features is maximized, that is, the intra-class distance of the features is minimized, giving the similarity term L_sim, into which the expected prediction output, the occurrence z of the k events, the regularization parameter, the data type and the data distribution D enter.
Combining the above, the rewritten loss function is:
L = L_reg + λ1·L_CORAL + λ2·L_sim
where λ1 and λ2 are the weighting coefficients of the domain adaptation and of the similarity measure, respectively.
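The source-domain prediction loss and the CORAL term can be sketched in a few lines. The similarity term below is a simple intra-class (feature-to-centroid) distance, which is only a stand-in for the patent's exact measure, and the weights lam1, lam2 are arbitrary demonstration values.

```python
import numpy as np

def coral_loss(Xs, Xt):
    """CORAL loss: squared Frobenius distance between the feature
    covariance matrices of source and target, scaled by 1/(4 d^2)."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return np.sum((Cs - Ct) ** 2) / (4 * d * d)

def total_loss(y_true, y_pred, Xs, Xt, lam1=1.0, lam2=0.1):
    """L = L_reg + lam1 * L_CORAL + lam2 * L_sim (stand-in similarity term)."""
    l_reg = np.mean((y_true - y_pred) ** 2)                # source prediction loss
    l_coral = coral_loss(Xs, Xt)                           # domain adaptation loss
    centroid = Xs.mean(axis=0)                             # intra-class distance of
    l_sim = np.mean(np.sum((Xs - centroid) ** 2, axis=1))  # features to their centroid
    return l_reg + lam1 * l_coral + lam2 * l_sim
```

When the source and target feature distributions coincide, the CORAL term vanishes, so minimizing L pulls the two domains' second-order statistics together while fitting the source labels.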
The rewritten loss function thus comprises three parts: the tool wear prediction loss on the source domain data, the CORAL loss between the source domain and target domain data features, and the similarity measure between the source domain and the target domain.
The loss function value is fed back through continuous iteration of the Adam optimization algorithm until it converges to the optimum, yielding the trained model. Each Adam iteration is:
m_t = β1·m_{t-1} + (1 - β1)·g_t
v_t = β2·v_{t-1} + (1 - β2)·g_t²
m̂_t = m_t / (1 - β1^t)
v̂_t = v_t / (1 - β2^t)
w_{t+1} = w_t - η·m̂_t / ( √v̂_t + ε )
where m_t denotes the first-order momentum term; β1 denotes the exponential decay rate of the first-moment estimate, typically taking the value 0.9; m̂_t denotes the first-order momentum correction value; v_t denotes the second-order momentum term; β2 denotes the exponential decay rate of the second-moment estimate, typically taking the value 0.999; v̂_t denotes the second-order momentum correction value; w_t denotes the weights of the t-th iteration and g_t the gradient value of the t-th iteration; η denotes the learning rate; and ε is a hyperparameter preventing the denominator from being zero. The iterative process is shown in fig. 5.
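One Adam iteration can be written directly from the update formulas above; the quadratic toy objective at the bottom is an assumption for demonstration, standing in for the loss L.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """Single Adam update: momentum terms, bias correction, weight step."""
    m = beta1 * m + (1 - beta1) * g        # first-order momentum term
    v = beta2 * v + (1 - beta2) * g * g    # second-order momentum term
    m_hat = m / (1 - beta1 ** t)           # first-order bias correction
    v_hat = v / (1 - beta2 ** t)           # second-order bias correction
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = ||w||^2 (gradient 2w) as a toy stand-in for the loss L
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
```

Note that the bias-corrected step size is roughly bounded by the learning rate lr, which is why Adam behaves stably even with sparse or noisy gradients.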
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The scheme in the embodiments of the application can be realized in various computer languages, such as the object-oriented programming language Java and the scripting language JavaScript.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. A UMAP data dimension reduction-based similarity metric transfer learning method, characterized by comprising the following steps:
S1, collecting machining-process signals, including milling current and vibration signals, under four different working conditions, the four working conditions comprising one source working condition and three variable working conditions; acquiring, with a three-dimensional video microscope, the tool wear amounts corresponding to the different machining-process signals under the source working condition;
S2, performing data cleaning on the machining-process signals of the four working conditions; extracting time-domain features, frequency-domain features and wavelet-decomposition time-frequency features from the cleaned signals, and then reducing the dimension of the extracted features; the dimension-reduced feature vectors of the source working condition, together with the corresponding tool wear amounts, form a labeled source domain; the dimension-reduced feature vectors of the three variable working conditions form unlabeled target domains;
S3, dividing the unlabeled target domain into two parts, one part forming, together with the labeled source domain, a training data set, which is fed into the similarity transfer learning model for training to obtain a trained tool wear prediction model; the other part of the unlabeled target domain is fed into the trained tool wear prediction model to predict the tool wear amount.
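As an illustrative sketch of the feature extraction in step S2 (not the patent's exact implementation), the following numpy-only code builds one feature vector per cleaned signal. A Haar decomposition stands in for the unspecified wavelet basis, and all function names, feature choices and parameter values are assumptions:

```python
import numpy as np

def haar_dwt(x, level=3):
    """Simple Haar wavelet decomposition (a stand-in for the patent's
    unspecified wavelet; a library such as PyWavelets could be used instead).
    Requires len(x) divisible by 2**level."""
    coeffs, a = [], x
    for _ in range(level):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)          # detail coefficients of this level
    coeffs.append(a)              # final approximation coefficients
    return coeffs

def extract_features(signal, fs=10000.0, level=3):
    """One feature vector per cleaned machining-process signal:
    time-domain, frequency-domain, and wavelet band-energy features."""
    feats = [signal.mean(), signal.std(),
             np.sqrt(np.mean(signal ** 2)),          # RMS
             signal.max() - signal.min()]            # peak-to-peak
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats += [spec.mean(), freqs[np.argmax(spec)]]   # mean amplitude, dominant frequency
    feats += [np.sum(c ** 2) for c in haar_dwt(signal, level)]  # band energies
    return np.array(feats)
```

Stacking the resulting vectors over all samples yields the feature-vector set that step S2 then reduces in dimension.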
2. The UMAP data dimension reduction-based similarity metric transfer learning method according to claim 1, wherein in step S1 the three variable working conditions comprise a variable cutting-speed condition, a variable feed-rate condition, and a variable cutting-depth condition.
3. The UMAP data dimension reduction-based similarity metric transfer learning method according to claim 1, wherein in step S2 the tool wear amounts acquired under the source working condition are denoted W = {w_1, w_2, ..., w_n}; invalid and abnormal values in the machining-process signals are cleaned; time-domain features, frequency-domain features and wavelet-decomposition time-frequency features are extracted from the cleaned signals, the m features of the i-th machining-process signal forming a feature vector x_i, the n samples forming the feature-vector set X = {x_1, x_2, ..., x_n}; the dimension of X is reduced to obtain the feature-vector set Y = {y_1, y_2, ..., y_n}.
The dimension reduction process specifically comprises the following steps:
S21, designing a conditional probability function p_{j|i} to describe the pairwise distribution relation between the high-dimensional sample points:

p_{j|i} = exp( -( d(x_i, x_j) - ρ_i ) / σ_i )

wherein ρ_i is the distance from the feature vector x_i to its first nearest-neighbor feature vector, σ_i is the diameter (scale) of the nearest-neighborhood of x_i, and d(x_i, x_j) represents the Euclidean distance between the feature vectors x_i and x_j;
S22, constructing a conditional probability function q_{j|i} to describe the distribution relation between the low-dimensional sample points:

q_{j|i} = 1 / ( 1 + a · ‖y_i - y_j‖^(2b) )

wherein a and b are both hyperparameters; adjusting a and b adjusts the degree of aggregation of the mapped low-dimensional data;
S23, symmetrizing the conditional probability functions p_{j|i} and q_{j|i}:

p_{ij} = p_{j|i} + p_{i|j} - p_{j|i} · p_{i|j}
q_{ij} = q_{j|i} + q_{i|j} - q_{j|i} · q_{i|j}

wherein p_{i|j} and q_{i|j} are the corresponding conditional probabilities with the roles of the indices i and j exchanged;
S24, constructing the binary cross-entropy loss function CE and solving for the minimum of CE, so that the pairwise relations between the high-dimensional sample points and between the low-dimensional sample points are as similar as possible, thereby obtaining the dimension-reduced feature-vector set Y, wherein the dimension of each vector y is less than m; the binary cross-entropy loss CE is:

CE = Σ_{i,j} [ p_{ij} · ln( p_{ij} / q_{ij} ) + ( 1 - p_{ij} ) · ln( ( 1 - p_{ij} ) / ( 1 - q_{ij} ) ) ]
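Steps S21-S24 can be sketched on a small batch with plain numpy. This is an illustration, not the production UMAP algorithm: σ_i is fixed here to the mean neighbor distance (UMAP proper finds it by binary search), and the function name and defaults are assumptions:

```python
import numpy as np

def umap_graph_loss(X, Y, a=1.0, b=1.0, eps=1e-9):
    """High-dim probabilities (S21), low-dim probabilities (S22),
    symmetrization (S23), and binary cross entropy (S24)."""
    # S21: p_{j|i} = exp(-(d_ij - rho_i) / sigma_i)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # exclude self-distances
    rho = d.min(axis=1, keepdims=True)               # 1st-nearest-neighbor distance
    sigma = np.nanmean(np.where(np.isfinite(d), d, np.nan), axis=1, keepdims=True)
    p = np.exp(-(d - rho) / sigma)                   # diagonal -> exp(-inf) = 0
    # S22: q_{j|i} = 1 / (1 + a * ||y_i - y_j||^(2b))
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    q = 1.0 / (1.0 + a * d2 ** b)
    np.fill_diagonal(q, 0.0)
    # S23: symmetrization of both graphs
    P = p + p.T - p * p.T
    Q = q + q.T - q * q.T
    # S24: binary cross entropy between the two fuzzy graphs
    ce = P * np.log((P + eps) / (Q + eps)) \
       + (1 - P) * np.log((1 - P + eps) / (1 - Q + eps))
    np.fill_diagonal(ce, 0.0)
    return ce.sum()
```

Minimizing this quantity with respect to Y (e.g. by gradient descent) is what produces the dimension-reduced feature-vector set.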
4. The UMAP data dimension reduction-based similarity metric transfer learning method according to claim 1, wherein in step S2 the dimension-reduced feature vectors of the source working condition and the corresponding tool wear amounts form the labeled source domain D_s = { (y_i^s, w_i^s) }, i = 1, ..., n_s; the dimension-reduced feature vectors of the three variable working conditions respectively form the unlabeled target domains D_t1 = { y_i^{t1} }, D_t2 = { y_i^{t2} } and D_t3 = { y_i^{t3} };
wherein y_i^s and w_i^s are respectively the dimension-reduced feature vector corresponding to the i-th machining-process signal under the source working condition and the corresponding tool wear amount; n_s is the total number of samples under the source working condition; y_i^{t1}, y_i^{t2} and y_i^{t3} are the dimension-reduced feature vectors corresponding to the i-th machining-process signal under the three variable working conditions; n_t1, n_t2 and n_t3 are respectively the total numbers of samples under the three variable working conditions.
5. The UMAP data dimension reduction-based similarity metric transfer learning method according to claim 1, wherein in step S3 the similarity transfer learning model is constructed on the basis of a variant LSTM model;
the forget gate of the variant LSTM model is computed as:

f_t = σ( W_f · [h_{t-1}, x_t] + b_f )

the addition (input) gate of the variant LSTM model is computed as:

i_t = σ( W_i · [h_{t-1}, x_t] + b_i )

the output gate of the variant LSTM model is computed as:

o_t = σ( W_o · [h_{t-1}, x_t] + b_o )

the forward propagation process of the variant LSTM model is:

computing the candidate cell state: C̃_t = tanh( W_c · [h_{t-1}, x_t] + b_c )
computing the cell state: C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t
computing the output: h_t = o_t ⊙ tanh( C_t )

wherein σ(·) is the sigmoid function, h_{t-1} represents the output at time t-1, x_t represents the input at time t, and h_t represents the output at time t; W_f and b_f denote the parameters required for training the forget gate, W_i and b_i the parameters required for training the addition gate, W_o and b_o the parameters required for training the output gate, and W_c and b_c the parameters required for computing C̃_t; C_t is the current cell state and C_{t-1} is the cell state at the previous time; f_t, i_t and o_t respectively represent the outputs of the forget gate, the addition gate and the output gate at time t; C̃_t represents the candidate value of the cell state at the current time step t.
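A minimal numpy sketch of one forward step of the LSTM cell described in claim 5, using the same gate structure and symbol roles; the dictionary-based parameter layout is an assumption for readability:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One forward step of the LSTM cell of claim 5.
    W["f"/"i"/"o"/"c"] map the concatenation [h_{t-1}, x_t] to each
    gate's pre-activation; b holds the matching bias vectors."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])        # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])        # addition (input) gate
    o_t = sigmoid(W["o"] @ z + b["o"])        # output gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])    # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde        # new cell state
    h_t = o_t * np.tanh(c_t)                  # output at time t
    return h_t, c_t
```

Iterating this step over a signal's feature sequence produces the hidden state fed to the wear-prediction head.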
6. The UMAP data dimension reduction-based similarity metric transfer learning method according to claim 1, wherein in step S3 the loss function of the similarity transfer learning model is:

L = L_pre + λ_1 · L_CORAL + λ_2 · L_sim

wherein L_pre is the tool wear prediction loss on the source-domain data, L_pre = (1/n_s) · Σ_{i=1}^{n_s} ( w_i - ŵ_i )², with w_i and ŵ_i respectively the measured and predicted wear amounts of the i-th tool sample under the source working condition; L_CORAL is the CORAL loss between the source-domain and target-domain data features, L_CORAL = ‖C_S - C_T‖_F² / ( 4 d² ), where d is the feature-vector dimension, C_S and C_T are respectively the source-domain and target-domain feature covariance matrices, and ‖·‖_F denotes the Frobenius norm; L_sim is the similarity measure between the source domain and the target domain, computed from the divergence between the model's predicted output distribution and the expected output distribution over the dimension-reduced feature vectors y, with λ a regularization parameter; λ_1 and λ_2 are respectively the weighting coefficients of the domain-adaptation and similarity-measure terms.
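The CORAL term of the loss in claim 6 can be written directly from its formula; this numpy sketch uses the sample covariance of each domain's feature matrix (function name illustrative):

```python
import numpy as np

def coral_loss(Xs, Xt):
    """L_CORAL = ||C_S - C_T||_F^2 / (4 d^2), with C_S, C_T the feature
    covariance matrices of the source and target domains (claim 6).
    Xs, Xt: (n_samples, d) feature matrices; sample counts may differ."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)        # source covariance, d x d
    Ct = np.cov(Xt, rowvar=False)        # target covariance, d x d
    return np.sum((Cs - Ct) ** 2) / (4.0 * d * d)
```

The term is zero when the two domains' second-order feature statistics already match, so minimizing it aligns the domains during training.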
7. The UMAP data dimension reduction-based similarity metric transfer learning method according to claim 1, wherein in step S3 the loss function L is made to converge to the optimum through multiple iterations of the Adam optimization algorithm, obtaining the trained tool wear prediction model;
the iteration formulas of the t-th iteration are:

m_t = β_1 · m_{t-1} + ( 1 - β_1 ) · g_t
v_t = β_2 · v_{t-1} + ( 1 - β_2 ) · g_t²
m̂_t = m_t / ( 1 - β_1^t )
v̂_t = v_t / ( 1 - β_2^t )
θ_t = θ_{t-1} - α · m̂_t / ( √(v̂_t) + ε )

wherein m_t represents the first-order momentum term and β_1 the exponential decay rate of the first-moment estimate, taking the value 0.9; m̂_t represents the first-order momentum correction value; v_t represents the second-order momentum term and β_2 the exponential decay rate of the second-moment estimate, taking the value 0.999; v̂_t represents the second-order momentum correction value; β_1^t and β_2^t denote the t-th powers of β_1 and β_2; θ_t represents the weights at the t-th iteration and g_t the gradient value at the t-th iteration; α represents the learning rate; ε is a hyperparameter that prevents the denominator from being zero.
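The t-th Adam iteration of claim 7 transcribes directly into code; this sketch keeps the optimizer state explicit (defaults match the decay rates named in the claim):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam iteration with bias-corrected first and second moments."""
    m = beta1 * m + (1 - beta1) * grad            # first-order momentum m_t
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-order momentum v_t
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected m
    v_hat = v / (1 - beta2 ** t)                  # bias-corrected v
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Looping this update over the model parameters with the gradient of the composite loss L performs the iterative feedback described in the claim.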
8. The UMAP data dimension reduction-based similarity metric transfer learning method according to claim 1, wherein in step S3 the unlabeled target domain is divided into two parts in a ratio of 80% to 20%, the 80% part of the unlabeled target domain together with the labeled source domain constituting the training data set.
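The 80/20 split of claim 8 can be sketched as a shuffled index split (seed and function name are assumptions; the patent does not specify the shuffling scheme):

```python
import numpy as np

def split_target(Y_t, train_frac=0.8, seed=0):
    """Split the unlabeled target-domain feature set into an 80% part
    (joined with the labeled source domain for training) and a 20% part
    (held out for wear prediction)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y_t))
    n_train = int(train_frac * len(Y_t))
    return Y_t[idx[:n_train]], Y_t[idx[n_train:]]
```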
CN202311246391.0A 2023-09-26 2023-09-26 UMAP data dimension reduction-based similarity measurement transfer learning method Pending CN116992954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311246391.0A CN116992954A (en) 2023-09-26 2023-09-26 UMAP data dimension reduction-based similarity measurement transfer learning method


Publications (1)

Publication Number Publication Date
CN116992954A true CN116992954A (en) 2023-11-03

Family

ID=88521691



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111687689A (en) * 2020-06-23 2020-09-22 重庆大学 Cutter wear state prediction method and device based on LSTM and CNN
CN113935375A (en) * 2021-10-13 2022-01-14 哈尔滨理工大学 High-speed electric spindle fault identification method based on UMAP dimension reduction algorithm
CN114048568A (en) * 2021-11-17 2022-02-15 大连理工大学 Rotating machine fault diagnosis method based on multi-source migration fusion contraction framework
US20220327035A1 (en) * 2020-06-03 2022-10-13 Soochow University Intra-class adaptation fault diagnosis method for bearing under variable working conditions
CN115351601A (en) * 2022-09-29 2022-11-18 哈尔滨工业大学 Tool wear monitoring method based on transfer learning
US20230184041A1 (en) * 2020-05-14 2023-06-15 Taurex Drill Bits, LLC Wear classification with machine learning for well tools
CN116619136A (en) * 2023-05-31 2023-08-22 北部湾大学 Multi-working-condition multi-source data cutter abrasion prediction method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Diederik P. Kingma et al.: "Adam: A Method for Stochastic Optimization", arXiv:1412.6980v9, page 2 *
Xu Yiyun et al.: "Bearing fault diagnosis based on similarity-metric transfer learning", Journal of Vibration and Shock, vol. 41, no. 16, pages 219-220 *
Wu Lin: "Research on an air-compressor fault diagnosis system based on long short-term memory networks", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 8, pages 028-141 *
Xue Xiaoqian: "Research on wear-state prediction technology for CNC machine-tool milling cutters", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 3, pages 022-1430 *

Similar Documents

Publication Publication Date Title
CN111633467B (en) Cutter wear state monitoring method based on one-dimensional depth convolution automatic encoder
CN113798920B (en) Cutter wear state monitoring method based on variational automatic encoder and extreme learning machine
CN110163429B (en) Short-term load prediction method based on similarity day optimization screening
CN112085252B (en) Anti-fact prediction method for set type decision effect
CN114619292B (en) Milling cutter wear monitoring method based on fusion of wavelet denoising and attention mechanism with GRU network
CN111126255A (en) Numerical control machine tool cutter wear value prediction method based on deep learning regression algorithm
CN111814728A (en) Method for recognizing wear state of cutting tool of numerical control machine tool and storage medium
CN112434891A (en) Method for predicting solar irradiance time sequence based on WCNN-ALSTM
CN115204035A (en) Generator set operation parameter prediction method and device based on multi-scale time sequence data fusion model and storage medium
CN114282443A (en) Residual service life prediction method based on MLP-LSTM supervised joint model
CN114218872A (en) Method for predicting remaining service life based on DBN-LSTM semi-supervised joint model
CN114297912A (en) Tool wear prediction method based on deep learning
Chen et al. Lightweight Convolutional Transformers Enhanced Meta Learning for Compound Fault Diagnosis of Industrial Robot
CN116975645A (en) Industrial process soft measurement modeling method based on VAE-MRCNN
CN116384244A (en) Electromagnetic field prediction method based on physical enhancement neural network
Zhou et al. Time-varying Online Transfer Learning for Intelligent Bearing Fault Diagnosis with Incomplete Unlabeled Target Data
CN114152442A (en) Rolling bearing cross-working condition fault detection method based on migration convolutional neural network
CN115394381B (en) High-entropy alloy hardness prediction method and device based on machine learning and two-step data expansion
CN116992954A (en) UMAP data dimension reduction-based similarity measurement transfer learning method
CN113835964B (en) Cloud data center server energy consumption prediction method based on small sample learning
CN114330089B (en) Rare earth element content change prediction method and system
Zhang et al. Machine Tools Thermal Error Modeling with Imbalanced Data Based on Transfer Learning
CN112749807A (en) Quantum state chromatography method based on generative model
CN111882441A (en) User prediction interpretation Treeshap method based on financial product recommendation scene
CN116910579A (en) Variable working condition machining chatter monitoring method based on transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination