CN116881765A - End mill wear state identification method based on deep learning - Google Patents
- Publication number
- CN116881765A (application CN202310684765.0A)
- Authority
- CN
- China
- Prior art keywords
- model
- encoder
- input
- data
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/24—Classification techniques
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
- G06N3/0499—Feedforward networks
- G06N3/088—Non-supervised learning, e.g. competitive learning
- G06N3/09—Supervised learning
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a deep-learning-based method for identifying the wear state of an end mill, comprising the following steps: collect the feed current and preprocess it into image samples used as input to a DCVAE model; perform unsupervised training with the DCVAE model to extract features from the sample data and store low-dimensional latent feature information in the latent space, where an additional encoder strengthens feature extraction and constraint conditions further concentrate the feature distribution; in the supervised training stage, feed the extracted low-dimensional latent features into an ELM model, which uses the label information to compare and classify them, thereby recognizing the different wear states of the tool. Finally, the test-set samples of the data set are input into the trained DCVAE-ELM network model to obtain the recognition results for the different tool wear states.
Description
Technical Field
The invention belongs to the technical field of high-speed milling, addresses the problem of tool wear state identification, and relates to a deep-learning-based method for identifying the wear state of an end mill; in particular, it relates to a tool wear state identification method based on a deep constrained variational auto-encoder and an extreme learning machine (Deep Constrained Variational Auto-Encoder and Extreme Learning Machine, DCVAE-ELM).
Background
Key elements of a machine play a decisive role in its operating efficiency and determine the service life and failure rate of the whole device. The cutting tool is an important executive component in producing such key elements: its sharpness is inseparable from the dimensional accuracy and surface quality of the parts, yet tool wear is unavoidable. Studies have shown that tool wear and breakage are major causes of increased production cost and reduced machining efficiency. Effective, intelligent monitoring and diagnosis of the tool wear state is therefore of great significance in actual production.
Tool wear monitoring can be divided into two main types: direct and indirect. Direct methods measure the tool itself with instruments and mainly include the radioactive-element method, the optical-image method, and the contact-resistance method. Although direct measurement is accurate and intuitive, it can only be performed offline and is easily disturbed, making results inaccurate; it also prolongs production time and reduces efficiency, so it is unsuitable for practical application. Researchers therefore favor indirect monitoring methods.
Indirect methods collect signals such as force, current, vibration, sound pressure, and acoustic emission with different sensors, extract features from a single signal or from fused multi-signal data with various signal-processing methods, and thereby establish a mapping between the feature information and the tool wear state. Commonly used indirect methods fall into two categories: traditional machine learning and deep learning. Traditional machine learning methods include the support vector machine (SVM), the artificial neural network (ANN), and the fuzzy C-means algorithm (FCM). Although widely used and reasonably effective, they have notable drawbacks: the selection and processing of sample data depend heavily on the expertise and experience of the technician; the models easily fall into local optima, and their convergence speed is hard to control; and manual feature extraction tends to lose feature information and cannot mine the deep features of the data. Compared with traditional machine learning, deep learning can efficiently process large amounts of sample data, overcomes the tendency toward local optima and the inability to discover deep features, avoids the loss of or interference with feature information introduced by manual processing, and shows superior capability in signal processing and feature recognition; it is therefore widely applied for its strong ability to handle complex problems.
To reflect tool wear indirectly, many researchers have tried different types of signals. Accelerometers are widely used to collect vibration signals because they are easy to install, reliable, accurate, and inexpensive, but they are easily disturbed in practice and place requirements on the mounting position. Cutting force signals are the most sensitive to tool wear, but dynamometers are expensive and difficult to install, making such signals impractical. Acoustic emission and sound pressure signals can reflect tool wear but are susceptible to disturbance. The feed current signal reflects the force change on the tool during cutting, and hence the change of the wear state; moreover, current sensors are easy to install and do not interfere with machining, giving this signal good prospects for practical application.
Disclosure of Invention
To overcome the shortcomings of traditional machine learning, the invention provides a tool wear state identification method based on a deep constrained variational auto-encoder and an extreme learning machine. The method comprises: collecting the feed current and preprocessing it into image samples used as input to the deep constrained variational auto-encoder (DCVAE); performing unsupervised training with the DCVAE to extract features from the sample data and store low-dimensional latent feature information in the latent space, where an additional encoder strengthens feature extraction and constraint conditions further concentrate the feature distribution; in the supervised training stage, feeding the extracted low-dimensional latent features, together with the label information, into an extreme learning machine (ELM) model, which compares and classifies them to recognize the different tool wear states; and finally inputting the test-set samples of the data set into the trained DCVAE-ELM network model to obtain the recognition results for the different wear states. The method frees technicians from reliance on expert knowledge and judgment experience, adaptively extracts the wear-related feature information in the feed current data, and offers high recognition accuracy and strong applicability.
The technical scheme of the invention is as follows:
an end mill wear state identification method based on deep learning comprises the following steps:
step 1: acquisition and pretreatment of feed current
A Hall current clamp sensor is used for collecting three-phase alternating current signals of a feeding motor of the numerical control machine; synthesizing the three-phase alternating current signals into current effective values and performing standardized treatment; then making the normalized current effective value signal into an image sample; dividing an image sample according to different cutter abrasion states to obtain data sets in different abrasion states, wherein the data set in each abrasion state comprises a training set and a testing set;
step 2: constructing a network model of a depth constraint variation self-encoder and an extreme learning machine
The network model consists of a deep constrained variational auto-encoder and an extreme learning machine. The auto-encoder is composed of an encoder, a hidden layer, and a decoder; the extreme learning machine is composed of an input layer, a hidden layer, and an output layer.
The encoder of the deep constrained variational auto-encoder extracts features from the input data; the hidden layer samples the extracted feature information to obtain the latent variable; and the decoder reconstructs the input data from the latent variable. Concretely, the encoder learns the posterior distribution of the input data, the hidden layer samples the feature information in that distribution to obtain the latent variable, and the latent variable is fed to the decoder for reconstruction, restoring the input data. The extracted low-dimensional feature information, i.e. the latent variable, serves as the input-layer data of the extreme learning machine, whose hidden layer classifies it to produce the output of the whole model.
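The sampling step in the hidden layer can be sketched with the standard reparameterization trick used by variational auto-encoders; the helper below is an illustrative assumption (the function name and shapes are not from the patent):

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).

    The encoder outputs the mean vector mu and log-variance log_var of the
    approximate posterior; the hidden layer draws the latent variable z
    from that distribution while keeping the operation differentiable.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

With a very small variance the sample collapses onto the mean, which is a quick sanity check of the formula.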
Step 3: model training
The training process of the whole model is divided into two stages of unsupervised training and supervised training.
In the unsupervised training stage, the input sample data serve as the target values, with the loss function and accuracy as references; the deep constrained variational auto-encoder built in step 2 is trained without supervision, with the loss function expressed as:

ELBO = ||X − X̂||² + (1/2) Σ_{i=1}^{m} [ (μ⁽ⁱ⁾)² + (σ⁽ⁱ⁾)² − ln (σ⁽ⁱ⁾)² − 1 ]

where ELBO is the loss function, X is the raw data, X̂ is the reconstructed data, m is the dimension of the latent variable, μ⁽ⁱ⁾ is a component of the mean vector of the normal distribution, and (σ⁽ⁱ⁾)² is a component of its variance vector.
In the supervised training stage, the low-dimensional feature information extracted by the deep constrained variational auto-encoder and the corresponding label information are used together as input to the extreme learning machine model, which is trained with accuracy as the reference.
During training, the loss values and recognition accuracies recorded over multiple iterations serve as the basis for judging the recognition performance. After training, the model with the minimum loss and the highest recognition accuracy is selected as the final model.
Step 4: monitoring by adopting the final model obtained in the step 3
Feed current data are collected and preprocessed as in step 1 to obtain image samples; the image samples are input into the final network model obtained in step 3 to identify the tool wear state.
The beneficial effects of the invention are as follows: the deep constrained variational auto-encoder and extreme learning machine network model adaptively extracts the wear-related feature information in the sample data, avoiding a laborious signal-processing pipeline and removing the dependence on expert knowledge and judgment experience found in traditional tool wear monitoring. The method has multi-level feature extraction capability, improving the accuracy of wear state identification. It avoids complicated preprocessing, identifies the tool wear state efficiently and accurately, and offers high recognition accuracy and strong robustness.
Drawings
FIG. 1 is a flow chart of a method for identifying the wear state of a cutter based on a depth constraint variation self-encoder and an extreme learning machine.
Fig. 2 is a block diagram of a depth constraint variation self-encoder.
Fig. 3 is a block diagram of the structure of the extreme learning machine.
Figs. 4a-1 and 4a-2 show the sensor data: the milling force waveform and the feed-current effective-value waveform, respectively.
Fig. 4b-1, 4b-2 and 4b-3 are graphs of feed current versus milling force signals in the x, y and z directions, respectively, collected by the OPCUA client inside the numerically controlled machine tool.
Fig. 4c-1 and 4c-2 are graphs of spectral analysis of data collected by a sensor, respectively milling force and feed current effective values.
Fig. 5 is a graph of the identification of the tool feed current dataset on the network model.
Detailed Description
The following describes the embodiments of the present invention in detail with reference to the technical scheme and the accompanying drawings.
In this embodiment, fig. 1 shows a flowchart of a method for identifying a tool wear state based on a depth constraint variation self-encoder and an extreme learning machine, which includes the following steps:
step 1: acquisition and preprocessing of feed current data
A Hall current clamp sensor collects the three-phase alternating current signals of the CNC machine tool's feed motor; the three-phase signals are synthesized into the current effective value and normalized; the normalized effective-value signal is then made into image samples; the image samples are divided according to the different tool wear states to obtain a data set for each wear state, each comprising a training set and a test set.
specifically, the numerical control machine tool used in the embodiment is a three-axis milling machine of the model GMC1020L, the spindle transmission adopts two-stage gear speed change transmission, and the siemens 828D numerical control system is adopted, and the version number is v04.07+sp07+hf01. The feed current signal was collected using a hall current clamp sensor model i400S manufactured by Fluke corporation. The cutter is a 4-edge hard alloy flat end mill with the diameter of 16mm, the rotating speed of a main shaft during processing is 380rpm, and the feeding speed is 55mm/min. The machining method is slot milling, and since the tool is full-cut, a cutting fluid is required to be used in the machining process. The workpiece material is FA520B, and the sampling frequency of the acquisition card is 10.24k when the feeding current signal is acquired.
The collected three-phase feed current signals i_u, i_v, i_w are fused into the current effective value I_rms according to:

I_rms = sqrt( (i_u² + i_v² + i_w²) / 3 )
effective value of current I rms Value normalization to [0, 255]The specific formula of the interval is as follows:
in which I max Is the maximum value of the effective value of the current, I min Is the minimum value of the effective value of the current. The normalized feed current effective value is converted into a matrix, and then the matrix is written into an image to make an image sample.
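A minimal sketch of this preprocessing chain in Python/NumPy; the helper names are assumptions, and the formulas follow the standard three-phase RMS and min-max forms:

```python
import numpy as np

def three_phase_rms(i_u, i_v, i_w):
    """Fuse three-phase feed-motor currents into one effective value,
    using the instantaneous relation I_rms = sqrt((i_u^2+i_v^2+i_w^2)/3)."""
    return np.sqrt((i_u**2 + i_v**2 + i_w**2) / 3.0)

def normalize_to_gray(i_rms):
    """Min-max normalize the effective value to the [0, 255] gray range."""
    i_min, i_max = i_rms.min(), i_rms.max()
    return 255.0 * (i_rms - i_min) / (i_max - i_min)

def to_image_sample(signal, side):
    """Reshape the 1-D normalized signal into a side x side image matrix."""
    return np.asarray(signal[: side * side], dtype=np.uint8).reshape(side, side)
```

The image side length is left as a parameter because the patent only states that each sample contains 57,600 data points, not the image dimensions.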
The tool wear is manually divided into 3 states, with 75 samples per state, giving 225 samples in total; each sample contains 57,600 data points. Two-thirds of the data set (150 samples) serve as the training set for training the network model; the remaining 75 samples serve as the test set for the final evaluation of recognition accuracy. The partition is shown in Table 1.
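The per-state partition described here (75 samples per wear state, two-thirds for training) can be reproduced with a small stratified split; the helper itself is an illustrative assumption:

```python
import numpy as np

def split_per_class(labels, train_frac=2/3, seed=0):
    """Stratified train/test split: shuffle the indices of each wear-state
    class and take train_frac of each class for the training set."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = int(round(len(idx) * train_frac))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```

Applied to 3 classes of 75 samples each, this yields the 150/75 split of Table 1.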
Table 1 data set partitioning
Step 2: constructing a depth constraint variation self-encoder and an extreme learning machine network model
As shown in fig. 2, the deep constrained variational auto-encoder consists of three parts: an encoder, a hidden layer, and a decoder. The encoder comprises an image input layer, a convolution layer, a ReLU activation layer, and a fully connected layer; the hidden layer comprises an encoder and a sampling layer; the decoder comprises an image input layer, a transposed convolution layer, and a ReLU activation layer. After the encoder extracts features from the image sample, the feature information enters the sampling layer, where the latent variable is obtained through the constraint conditions and a sampling operation; the decoder then restores the input data from the latent variable. Through multiple iterations, the feature information in the sample data is fully mined. The extreme learning machine operates on the low-dimensional feature information in the hidden layer and uses the label information for comparison and classification, producing the output of the whole model. In this embodiment, the convolution layer uses 64 filters of size [3 3] with stride [2 2]; the transposed convolution layer uses 64 filters of size [7 7] with stride [3 4].
As shown in fig. 3, the extreme learning machine model is composed of an input layer, a hidden layer, and an output layer. Its mathematical model is:

y_i = Σ_{j=1}^{L} β_j · g(W_j · X_i + b_j),  i = 1, 2, …, N

where X_i is the input matrix, g(x) is the activation function, W_j = [w_{j1}, w_{j2}, …, w_{jn}]^T is the weight matrix between the input layer and the hidden layer (the input weights), β_j = [β_{j1}, β_{j2}, …, β_{jm}]^T is the weight matrix between the hidden layer and the output layer (the output weights), b_j is the bias of the j-th hidden unit, W_j · X_i is the inner product of W_j and X_i, and y_i is the actual output matrix.
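A compact sketch of ELM training consistent with this model: the input weights W_j and biases b_j are drawn randomly and fixed, and only the output weights β are solved in closed form via the Moore-Penrose pseudo-inverse. The hidden-layer size and tanh activation below are illustrative assumptions:

```python
import numpy as np

def train_elm(X, T, n_hidden=64, seed=0):
    """Train an extreme learning machine on inputs X and one-hot targets T.
    Random input weights and biases stay fixed; the output weights beta
    solve the least-squares problem beta = pinv(H) @ T."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # input weights (fixed)
    b = rng.normal(size=n_hidden)                 # hidden biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # output weights (closed form)
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Forward pass: hidden activations times the learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Because beta has a closed-form solution, training requires no iterative optimization, which is the main speed advantage ELM brings to the supervised stage.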
Step 3: model training
The training process of the whole model is divided into two stages of unsupervised training and supervised training.
In the unsupervised training stage, the input sample data serve as the target values, with the loss function and accuracy as references; the deep constrained variational auto-encoder built in step 2 is trained without supervision, with the loss function expressed as:

ELBO = ||X − X̂||² + (1/2) Σ_{i=1}^{m} [ (μ⁽ⁱ⁾)² + (σ⁽ⁱ⁾)² − ln (σ⁽ⁱ⁾)² − 1 ]

where ELBO is the loss function, X is the raw data, X̂ is the reconstructed data, m is the dimension of the latent variable, μ⁽ⁱ⁾ is a component of the mean vector of the normal distribution, and (σ⁽ⁱ⁾)² is a component of its variance vector.
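This loss, i.e. the squared reconstruction error plus the Kullback-Leibler term over the latent dimensions, can be computed directly; the equal weighting of the two terms below is an assumption, since the patent's constraint weighting is not printed:

```python
import numpy as np

def dcvae_loss(x, x_hat, mu, log_var):
    """Negative ELBO for a variational auto-encoder: squared reconstruction
    error plus the KL divergence of N(mu, sigma^2) from the standard
    normal prior N(0, I), summed over the m latent dimensions."""
    recon = np.sum((x - x_hat) ** 2)
    kl = 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)
    return recon + kl
```

When the posterior matches the prior (mu = 0, log_var = 0) the KL term vanishes and only the reconstruction error remains.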
In the supervised training stage, the low-dimensional latent features extracted by the deep constrained variational auto-encoder and the corresponding label information are used together as input to the extreme learning machine model, which is trained with accuracy as the reference.
During training, the loss values and recognition accuracies recorded over multiple iterations serve as the basis for judging the recognition performance. After training, the model with the minimum loss and the highest recognition accuracy is selected as the final model.
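The final-model selection can be expressed as a one-line rule; ranking first by accuracy and breaking ties by loss is one reading of "minimum loss and highest accuracy", since the patent does not state how the two criteria are combined:

```python
def select_best(history):
    """Pick the recorded iteration with the highest accuracy, breaking
    ties by the lowest loss. Each record is a dict with 'acc' and 'loss';
    the record format is an illustrative assumption."""
    return min(history, key=lambda r: (-r["acc"], r["loss"]))
```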
Step 4: monitoring by adopting the final model obtained in the step 3
Feed current data are collected and preprocessed as in step 1 to obtain image samples, which are input into the final network model to identify the tool wear state; the recognition result is shown in fig. 5. Three methods, a variational auto-encoder with extreme learning machine (VAE-ELM), a variational auto-encoder with support vector machine (VAE-SVM), and a stacked sparse auto-encoder network (SSAE-Softmax), are compared against the proposed deep constrained variational auto-encoder and extreme learning machine (DCVAE-ELM) method; the recognition results of the different models, averaged over multiple runs, are shown in Table 2.
Table 2 comparison of the identification results of different methods
In summary, the experimental analysis of figs. 4a-1, 4a-2, 4b-1, 4b-2, 4b-3, 4c-1, and 4c-2 shows that the feed current and the milling force follow the same trend, indicating that the feed current can reflect the milling force and hence the change of the tool wear state; the feed current signal is therefore selected as the monitoring signal. Meanwhile, the proposed method identifies the tool wear state with high accuracy, showing that it can distinguish the different wear states more accurately.
Claims (1)
1. An end mill wear state identification method based on deep learning is characterized by comprising the following steps:
step 1: acquisition and pretreatment of feed current
A Hall current clamp sensor is used for collecting three-phase alternating current signals of a feeding motor of the numerical control machine; synthesizing the three-phase alternating current signals into current effective values and performing standardized treatment; then making the normalized current effective value signal into an image sample; dividing an image sample according to different cutter abrasion states to obtain data sets in different abrasion states, wherein the data set in each abrasion state comprises a training set and a testing set;
step 2: constructing a network model of a depth constraint variation self-encoder and an extreme learning machine
The network model consists of a deep constrained variational auto-encoder and an extreme learning machine; the auto-encoder is composed of an encoder, a hidden layer, and a decoder; the extreme learning machine is composed of an input layer, a hidden layer, and an output layer;
the encoder of the deep constrained variational auto-encoder extracts features from the input data; the hidden layer samples the extracted feature information to obtain the latent variable; the decoder reconstructs the input data from the latent variable; the encoder learns the posterior distribution of the input data, the hidden layer samples the feature information in that distribution to obtain the latent variable, and the latent variable is fed to the decoder for reconstruction, restoring the input data; the extracted low-dimensional feature information serves as the input-layer data of the extreme learning machine, whose hidden layer classifies it to produce the output of the whole model;
step 3: model training
The training process of the whole model is divided into two stages of unsupervised training and supervised training;
in the unsupervised training stage, the input sample data serve as the target values, with the loss function and accuracy as references; the deep constrained variational auto-encoder built in step 2 is trained without supervision, with the loss function expressed as:

ELBO = ||X − X̂||² + (1/2) Σ_{i=1}^{m} [ (μ⁽ⁱ⁾)² + (σ⁽ⁱ⁾)² − ln (σ⁽ⁱ⁾)² − 1 ]

where ELBO is the loss function, X is the raw data, X̂ is the reconstructed data, m is the dimension of the latent variable, μ⁽ⁱ⁾ is a component of the mean vector of the normal distribution, and (σ⁽ⁱ⁾)² is a component of its variance vector;
in the supervised training stage, the low-dimensional feature information extracted by the deep constrained variational auto-encoder and the corresponding label information are used together as input to the extreme learning machine model, which is trained with accuracy as the reference;
during training, the loss values and recognition accuracies recorded over multiple iterations serve as the basis for judging the recognition performance; after training, the model with the minimum loss and the highest recognition accuracy is selected as the final model;
step 4: monitoring by adopting the final model obtained in the step 3
Collecting and preprocessing feed current data in a step 1 mode to obtain an image sample; the image sample is input into the final network model obtained in the step 3 to identify the abrasion state of the cutter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310684765.0A CN116881765A (en) | 2023-06-12 | 2023-06-12 | End mill wear state identification method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310684765.0A CN116881765A (en) | 2023-06-12 | 2023-06-12 | End mill wear state identification method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116881765A (en) | 2023-10-13
Family
ID=88257556
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310684765.0A Pending CN116881765A (en) | 2023-06-12 | 2023-06-12 | End mill wear state identification method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116881765A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN117001423A (en) * | 2023-09-28 | 2023-11-07 | 智能制造龙城实验室 | Tool state online monitoring method based on evolutionary learning
CN117001423B (en) * | 2023-09-28 | 2023-12-05 | 智能制造龙城实验室 | Tool state online monitoring method based on evolutionary learning
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110153802B (en) | Tool wear state identification method based on convolution neural network and long-term and short-term memory neural network combined model | |
CN111633467B (en) | Cutter wear state monitoring method based on one-dimensional depth convolution automatic encoder | |
CN110509109B (en) | Cutter wear monitoring method based on multi-scale depth convolution cyclic neural network | |
CN111914883B (en) | Spindle bearing state evaluation method and device based on deep fusion network | |
CN113128561A (en) | Machine tool bearing fault diagnosis method | |
CN110647943A (en) | Cutting tool wear monitoring method based on evolutionary data cluster analysis | |
CN111687689A (en) | Cutter wear state prediction method and device based on LSTM and CNN | |
CN112650146B (en) | Fault diagnosis optimization method, system and equipment of numerical control machine tool under multiple working conditions | |
Huang et al. | Tool wear monitoring with vibration signals based on short‐time Fourier transform and deep convolutional neural network in milling | |
CN114619292B (en) | Milling cutter wear monitoring method based on fusion of wavelet denoising and attention mechanism with GRU network | |
CN116881765A (en) | End mill wear state identification method based on deep learning | |
CN110561191B (en) | Numerical control machine tool cutter abrasion data processing method based on PCA and self-encoder | |
CN113798920B (en) | Cutter wear state monitoring method based on variational automatic encoder and extreme learning machine | |
CN107451760B (en) | Rolling bearing fault diagnosis method based on time window slip limited Boltzmann machine | |
CN108581633A (en) | A method of based on the more sensor monitoring cutting tool states of genetic algorithm optimization | |
CN113850161A (en) | Flywheel fault identification method based on LSTM deep noise reduction self-encoder | |
CN111126255A (en) | Numerical control machine tool cutter wear value prediction method based on deep learning regression algorithm | |
CN109333159B (en) | Depth kernel extreme learning machine method and system for online monitoring of tool wear state | |
CN108857577A (en) | Cutting-tool wear state monitoring method and equipment | |
CN114227382B (en) | Cutter damage monitoring system and method based on novel capsule network | |
CN115741235A (en) | Wear prediction and health management method based on five-axis machining center cutter | |
CN108393744A (en) | A kind of more sensor monitoring methods of cutting tool state | |
Gao et al. | New tool wear estimation method of the milling process based on multisensor blind source separation | |
CN114354184A (en) | Deep learning-based method and device for establishing health early warning model of main shaft of large-scale rotating equipment | |
CN113627544A (en) | Machine tool milling cutter state identification method based on multi-source heterogeneous data fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||