CN114833636B - Cutter wear monitoring method based on multi-feature space convolution neural network - Google Patents
- Publication number
- CN114833636B (granted publication; application CN202210376930.1A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- feature
- size
- characteristic
- pooling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23Q—DETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
- B23Q17/00—Arrangements for observing, indicating or measuring on machine tools
- B23Q17/09—Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool
- B23Q17/0952—Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool during machining
- B23Q17/0957—Detection of tool breakage
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a cutter wear monitoring method based on a multi-feature-space convolutional neural network, which comprises the following steps. S1: collect cutting force data of the cutter during the stable cutting stage, together with the cutter wear state at the corresponding moment, to construct a sample set. S2: perform feature extraction on the sample set to obtain time-domain, frequency-domain and time-frequency-domain features, and spatially fuse the processed features to obtain the fusion feature Concat1. S3: input the fusion feature Concat1 into a convolutional neural network for training to obtain a cutter wear monitoring model and thereby monitor cutter wear. The method remedies the limited representation capability of manual feature extraction and of a single feature space, and realizes adaptive extraction and fusion of multiple feature spaces. In addition, the convolutional network adopts an Inception architecture, which increases the width of the model, reduces the number of trainable parameters, improves the representation capability of the model, and lowers the risk of overfitting.
Description
Technical Field
The invention belongs to the technical field of milling cutter wear monitoring, and particularly relates to a cutter wear monitoring method based on a multi-feature space convolution neural network.
Background
With the development of the nation's aerospace, aircraft carrier and weapons industries, product parts impose ever stricter requirements on material performance, and engineering materials with high hardness, high strength and high toughness (such as ultra-high-strength steel, titanium alloy and high-temperature alloy) have appeared. When these materials are milled, the cutter wears rapidly. A worn cutter not only increases the cutting force but also degrades the machined surface quality, thereby affecting the service performance of the parts. Monitoring the cutter wear state during milling is therefore of great significance.
Tool wear state identification belongs to the field of pattern recognition: fault features are extracted and mapped to a target space, i.e., a nonlinear classifier is designed to map the feature space to the target space. Common nonlinear classifiers include Gaussian process regression, BP neural networks and support vector machines. Owing to their time-shift invariance, one-dimensional convolutional neural networks have in recent years been widely applied to time-series signals such as cutting force and vibration.
Although existing cutter wear monitoring techniques have matured to some degree, missed detections, false alarms and poor real-time performance still occur under actual production conditions. A more effective cutter wear monitoring method that meets actual machining requirements is therefore needed.
Disclosure of Invention
Aiming at the defects and improvement requirements of the prior art, the invention provides a cutter wear monitoring method based on a multi-feature-space convolutional neural network. The method aims to effectively monitor the cutter wear state during milling and to provide a reference for cutter life evaluation and cutter replacement.
The invention achieves the above purpose through the following technical scheme:
a cutter wear monitoring method based on a multi-feature space convolution neural network comprises the following steps:
S1: collecting cutting force data of the cutter during the stable cutting stage and the cutter wear state at the corresponding moment to construct a sample set, the cutter wear states being set as initial wear, normal wear and severe wear;
S2: performing feature extraction on the sample set to obtain time-domain features, frequency-domain features and time-frequency-domain features, and performing spatial fusion on them to obtain the fusion feature Concat1;
S3: inputting the fusion feature Concat1 into a convolutional neural network for training, with the cutter wear state as output, to obtain a cutter wear monitoring model.
As a further optimization scheme of the invention, in S1 a dynamometer collects cutting force data in real time during the stable cutting stage, a moving sliding-window method divides the cutting force data, and each segment is paired with the corresponding cutter wear state, thereby constructing the sample set.
As a further optimization scheme of the present invention, the specific steps for obtaining the fusion feature Concat1 in S2 are as follows:
S201: denoising the sample set using the discrete wavelet transform to obtain the time-domain features; performing the fast Fourier transform on the time-domain features to obtain the frequency-domain features; taking the wavelet coefficients obtained by wavelet transform of the time-domain features as the time-frequency-domain features;
S202: performing a convolution operation on each of the time-domain, frequency-domain and time-frequency-domain features with a convolution kernel of size 5 × 5 to obtain convolution features C1, C2 and C3;
S203: pooling the convolution features C1, C2 and C3 with a pooling kernel of size 3 × 3 and pooling stride 3 to obtain pooling features P1, P2 and P3;
S204: concatenating and fusing the pooling features P1, P2 and P3 to obtain the fusion feature Concat1 of size 165 × 96.
As a further optimization scheme of the present invention, in S201 the sample set is denoised using the discrete wavelet transform, where the wavelet basis is a Daubechies wavelet whose scale function φ(x) and wavelet function ψ(x) can be expressed as:
φ(x) = Σ_k a_k φ(2x − k)
ψ(x) = Σ_k b_k φ(2x − k)
where k is the translation, and a_k and b_k are the coefficients of the scale function and the wavelet function, respectively.
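The two-scale relations above can be checked numerically for the lowest-order Daubechies wavelet (db1, the Haar wavelet), for which, under the normalization used here, the coefficients are a = (1, 1) and b = (1, −1):

```python
import numpy as np

def phi(x):
    """Haar scaling function: indicator of [0, 1)."""
    return ((x >= 0) & (x < 1)).astype(float)

x = np.linspace(0, 1, 1000, endpoint=False)
a = [1.0, 1.0]    # scale-function coefficients a_k for Haar
b = [1.0, -1.0]   # wavelet-function coefficients b_k for Haar

# phi(x) = sum_k a_k * phi(2x - k); psi(x) = sum_k b_k * phi(2x - k)
phi_refined = sum(ak * phi(2 * x - k) for k, ak in enumerate(a))
psi = sum(bk * phi(2 * x - k) for k, bk in enumerate(b))

assert np.allclose(phi(x), phi_refined)   # the two-scale relation holds
```

Higher-order Daubechies wavelets satisfy the same relations with longer coefficient sequences.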
As a further optimization scheme of the present invention, the convolutional neural network in S3 adopts an Inception architecture.
As a further optimization scheme of the present invention, the specific steps of S3 are as follows:
S301: performing a convolution operation on the fusion feature Concat1 with a convolution kernel of size 1 × 1 to obtain convolution feature C4 of size 165 × 32; performing a size-preserving pooling operation on Concat1 to obtain pooling feature P4 of size 165 × 96, and convolving P4 with a 1 × 1 kernel to obtain convolution feature C5 of size 165 × 32; convolving Concat1 with a 1 × 1 kernel to obtain convolution feature C6 of size 165 × 48, and convolving C6 with a 3 × 3 kernel to obtain convolution feature C7 of size 165 × 64; convolving Concat1 with a 1 × 1 kernel to obtain convolution feature C8 of size 165 × 16, and convolving C8 with a 5 × 5 kernel to obtain convolution feature C9 of size 165 × 16;
S302: concatenating and fusing the convolution features C4, C5, C7 and C9 to obtain the fusion feature Concat2 of size 165 × 144;
S303: convolving Concat2 with a 5 × 5 kernel to obtain convolution feature C10 of size 161 × 32;
S304: pooling C10 with a pooling kernel of size 3 × 3 and pooling stride 3 to obtain pooling feature P5 of size 53 × 32;
S305: convolving P5 with a 5 × 5 kernel to obtain convolution feature C11 of size 49 × 32;
S306: performing global pooling on C11 to obtain pooling feature P6 of size 32;
S307: connecting P6 to a fully connected layer, obtaining the classification result through a Softmax classifier, and back-propagating the error to obtain the trained cutter wear monitoring model.
As a further optimization scheme of the present invention, the convolution operations are all one-dimensional convolutions, and the convolution layer operation is:
a^l = g(a^(l−1) * K^l + b^l)
where a^(l−1) is the feature matrix of layer l − 1, K^l is the convolution kernel of layer l (for a one-dimensional convolution the kernel moves only along the time axis), b^l is the bias vector of layer l, and g is the ReLU activation function.
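A one-dimensional convolution layer of this form can be written directly in NumPy; the kernel values and sizes below are arbitrary toy choices, not taken from the patent.

```python
import numpy as np

def conv1d_layer(a_prev, kernel, bias):
    """One 1-D convolution layer: a^l = g(a^(l-1) * K^l + b^l),
    with a 'valid' convolution along the time axis and ReLU as g."""
    n, k = len(a_prev), len(kernel)
    z = np.array([np.dot(a_prev[i:i + k], kernel) for i in range(n - k + 1)])
    return np.maximum(z + bias, 0.0)   # ReLU activation

a_prev = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
out = conv1d_layer(a_prev, kernel=np.array([1.0, 0.0, -1.0]), bias=0.0)
# each window gives -2, so ReLU clamps the output to zeros
```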
As a further optimization scheme of the present invention, the cross-entropy loss function of the Softmax classifier in S307 is:
loss = −Σ_i y_i log f(z_i)
where y_i is the true label of the current sample, z_i is the output value of the fully connected layer, and f is the Softmax function.
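The Softmax cross-entropy above can be sketched in NumPy. The three-class example mirrors the initial/normal/severe wear labels; the logit values are made up.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # shift by max for numerical stability
    return e / e.sum()

def cross_entropy(y_onehot, z):
    """loss = -sum_i y_i * log f(z_i), with f the Softmax function."""
    return float(-np.sum(y_onehot * np.log(softmax(z))))

# three wear classes: initial, normal, severe (one-hot true label)
z = np.array([2.0, 1.0, 0.1])    # fully connected layer output
y = np.array([1.0, 0.0, 0.0])
loss = cross_entropy(y, z)       # small loss: the correct class scores highest
```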
The invention has the beneficial effects that:
the method improves the defects of the manual feature extraction and the single feature space in the representation capability, and realizes the self-adaptive extraction and fusion of the multi-feature space; in addition, the convolution network adopts an inclusion framework, so that the width of the model is increased, the training parameters of the model can be reduced, the representation capability of the model is improved, and the risk of overfitting is reduced. The invention has the advantages of strong applicability, high monitoring precision and the like, can accurately identify the wear state of the cutter, and meets the requirements in actual processing.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention;
FIG. 2 is a block diagram of a method embodiment of the present invention;
FIGS. 3 (a) - (c) are three-dimensional cutting force sample data for the initial, normal and severe tool wear periods, respectively;
FIGS. 4 (a) - (c) are micrographs of tool initial wear, normal wear and severe wear, respectively;
FIG. 5 is the Inception convolutional neural network architecture.
Detailed Description
The present application is now described in further detail with reference to the drawings. The following detailed description is given for illustrative purposes only and is not to be construed as limiting the scope of the application; those skilled in the art can make numerous insubstantial modifications and adaptations based on the above disclosure.
Example 1
As shown in FIGS. 1-5, a cutter wear monitoring method based on a multi-feature spatial convolutional neural network comprises the following steps:
(1) Carry out a cutter wear test, collect the three-dimensional cutting force signal in real time with a dynamometer, and observe the cutter wear state at fixed intervals of cutting length. Extract the cutting force data of the stable cutting stage and divide it with a moving sliding-window method to construct the sample set (as shown in FIG. 3); the sampling frequency is 10 kHz, and the sample set size is 7680 × 500 × 3;
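The sample-set construction in step (1) amounts to a window split of the force record. A minimal sketch follows; the non-overlapping 500-point window and the 10-second record length are illustrative, since the patent's moving sliding window may well overlap.

```python
import numpy as np

def sliding_windows(signal, win_len=500, step=500):
    """Split a (n_samples, 3) three-axis force record into
    (num_windows, win_len, 3) training samples."""
    n = (len(signal) - win_len) // step + 1
    return np.stack([signal[i * step : i * step + win_len] for i in range(n)])

# hypothetical record: 10 s at 10 kHz, three force channels
force = np.zeros((100_000, 3))
samples = sliding_windows(force)   # 200 windows of 500 x 3
```

Each window is then labeled with the wear state observed at that point of the test, as in step (2).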
(2) Label the sample set according to the cutter wear state observed in the wear test of step (1) (as shown in FIG. 4); the label set contains three classes: initial wear, normal wear and severe wear;
(3) For the sample set constructed in step (1), denoise the signal in the time domain using the discrete wavelet transform (DWT); apply the fast Fourier transform (FFT) to the denoised time-domain signal to obtain the frequency-domain signal; and take the wavelet coefficients obtained by wavelet transform (WT) of the time-domain signal as the time-frequency-domain signal. Before model training, normalize the time-domain, frequency-domain and time-frequency-domain signals separately so that the sample data lie in a small range;
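The patent does not specify which normalization is used; min-max scaling into [0, 1], applied separately to each domain, is one common choice and is sketched here as an assumption.

```python
import numpy as np

def minmax_normalize(x):
    """Scale a feature vector into [0, 1]; applied separately to the
    time-, frequency- and time-frequency-domain signals before training."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

x = np.array([2.0, 4.0, 6.0])
x_norm = minmax_normalize(x)    # -> [0.0, 0.5, 1.0]
```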
the wavelet basis function used in the discrete wavelet transform is Daubechies wavelet, and the scale function phi (x) and the wavelet function psi (x) can be expressed as follows:
φ(x)=∑ k a k φ(2x-k)
ψ(x)=∑ k b k φ(2x-k)
wherein k is the translation amount, a k And b k The coefficients of the scale function and the wavelet function, respectively.
(4) Train the convolutional neural network with the features obtained after the feature processing of step (3) as input and the labels assigned in step (2) as output; the convolutional neural network adopts the Inception architecture shown in FIG. 5, and the training process is as follows:
(4.1) Convolve the time-domain signal with a convolution kernel of size 5 × 5 to obtain convolution feature C1 of size 496 × 32; then pool with a pooling kernel of size 3 × 3 and pooling stride 3 to obtain pooling feature P1 of size 165 × 32; apply the same convolution and pooling to the frequency-domain and time-frequency-domain signals to obtain pooling features P2 and P3, respectively;
the convolution operations described in the present invention are all one-dimensional convolutions, and the convolution layer operations can be expressed as follows:
in the formula, a l-1 Is a layer l-1 feature matrix, K l Performing one-dimensional convolution operation on the first layer of convolution kernel and the second layer of convolution kernel, namely, the convolution kernels only move along a time axis; in addition, b l For the l-th layer bias vector, g is the ReLU activation function.
(4.2) Concatenate the pooling features P1, P2 and P3 along the last dimension to obtain the fusion feature Concat1 of size 165 × 96;
(4.3) Convolve Concat1 with a 1 × 1 convolution kernel to obtain convolution feature C4 of size 165 × 32;
(4.4) Apply a size-preserving pooling operation to Concat1 to obtain pooling feature P4 of size 165 × 96, then convolve P4 with a 1 × 1 kernel to obtain convolution feature C5 of size 165 × 32;
(4.5) Convolve Concat1 with a 1 × 1 kernel to obtain convolution feature C6 of size 165 × 48, then convolve C6 with a 3 × 3 kernel to obtain convolution feature C7 of size 165 × 64;
(4.6) Convolve Concat1 with a 1 × 1 kernel to obtain convolution feature C8 of size 165 × 16, then convolve C8 with a 5 × 5 kernel to obtain convolution feature C9 of size 165 × 16;
(4.7) Concatenate the convolution features C4, C5, C7 and C9 along the last dimension to obtain the fusion feature Concat2 of size 165 × 144;
(4.8) Convolve Concat2 with a 5 × 5 kernel to obtain convolution feature C10 of size 161 × 32; then pool with a kernel of size 3 × 3 and stride 3 to obtain pooling feature P5 of size 53 × 32;
(4.9) Convolve P5 with a 5 × 5 kernel to obtain convolution feature C11 of size 49 × 32, then apply global pooling to obtain pooling feature P6 of size 32;
(4.10) Connect P6 to the fully connected layer, obtain the classification result through the Softmax classifier, and back-propagate the error.
The Softmax cross-entropy loss function can be expressed as:
loss = −Σ_i y_i log f(z_i)
where i indexes the classes, y_i is the true label of the current sample, z_i is the output value of the fully connected layer, and f is the Softmax function.
The invention uses a self-built cutter wear data set comprising three-axis cutting force data and cutter wear state labels. There are 5376 training samples and 2304 test samples for the classification task. After parameter optimization, the test accuracy exceeds 99.87%, which satisfies the cutter wear monitoring requirements of actual machining.
The above embodiment expresses only one implementation of the invention, and although its description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the invention.
Claims (7)
1. A cutter wear monitoring method based on a multi-feature space convolution neural network is characterized by comprising the following steps:
S1: collecting cutting force data of the cutter during the stable cutting stage and the cutter wear state at the corresponding moment to construct a sample set, the cutter wear states being set as initial wear, normal wear and severe wear;
s2: performing feature extraction on the sample set to obtain time domain features, frequency domain features and time-frequency domain features, and performing spatial fusion on the time domain features, the frequency domain features and the time-frequency domain features after processing to obtain fusion features Concat1;
s3: inputting the fusion characteristic Concat1 into a convolutional neural network for training, and outputting a cutter wear state to obtain a cutter wear monitoring model so as to realize monitoring of cutter wear;
the specific steps for obtaining the fusion characteristic Concat1 in the S2 are as follows:
s201: denoising the sample set by adopting discrete wavelet transform to obtain time domain characteristics; performing fast Fourier transform on the time domain characteristic to obtain a frequency domain characteristic; wavelet coefficients obtained by wavelet transformation of the time domain features are used as time-frequency domain features;
s202: performing convolution operation on the time domain characteristic, the frequency domain characteristic and the time-frequency domain characteristic and a convolution kernel with the size of 5 multiplied by 5 respectively to obtain a convolution characteristic C1, a convolution characteristic C2 and a convolution characteristic C3;
s203: performing pooling treatment on the convolution characteristic C1, the convolution characteristic C2 and the convolution characteristic C3 and a pooling kernel with the size of 3 x 3 to obtain a pooling characteristic P1, a pooling characteristic P2 and a pooling characteristic P3, wherein the pooling step length is 3;
s204: splicing and fusing the pooling features P1, the pooling features P2 and the pooling features P3 to obtain the fused feature Concat1 with the size of 165 x 96.
2. The tool wear monitoring method based on the multi-feature spatial convolution neural network as claimed in claim 1, characterized in that: in S1, a dynamometer collects cutting force data of the stable cutting stage in real time, a moving sliding-window method divides the cutting force data, and each segment is paired with the corresponding cutter wear state, thereby constructing the sample set.
3. The tool wear monitoring method based on the multi-feature space convolutional neural network as claimed in claim 1, characterized in that: in S201, the sample set is denoised using the discrete wavelet transform, the wavelet basis being a Daubechies wavelet whose scale function φ(x) and wavelet function ψ(x) can be expressed as:
φ(x) = Σ_k a_k φ(2x − k)
ψ(x) = Σ_k b_k φ(2x − k)
where k is the translation, and a_k and b_k are the coefficients of the scale function and the wavelet function, respectively.
4. The tool wear monitoring method based on the multi-feature space convolutional neural network as claimed in claim 1, characterized in that: the convolutional neural network in S3 adopts an Inception architecture.
5. The tool wear monitoring method based on the multi-feature space convolutional neural network as claimed in claim 1, characterized in that: the specific steps of S3 are as follows:
S301: performing a convolution operation on the fusion feature Concat1 with a convolution kernel of size 1 × 1 to obtain convolution feature C4 of size 165 × 32; performing a size-preserving pooling operation on Concat1 to obtain pooling feature P4 of size 165 × 96, and convolving P4 with a 1 × 1 kernel to obtain convolution feature C5 of size 165 × 32; convolving Concat1 with a 1 × 1 kernel to obtain convolution feature C6 of size 165 × 48, and convolving C6 with a 3 × 3 kernel to obtain convolution feature C7 of size 165 × 64; convolving Concat1 with a 1 × 1 kernel to obtain convolution feature C8 of size 165 × 16, and convolving C8 with a 5 × 5 kernel to obtain convolution feature C9 of size 165 × 16;
S302: concatenating and fusing the convolution features C4, C5, C7 and C9 to obtain the fusion feature Concat2 of size 165 × 144;
S303: convolving Concat2 with a 5 × 5 kernel to obtain convolution feature C10 of size 161 × 32;
S304: pooling C10 with a pooling kernel of size 3 × 3 and pooling stride 3 to obtain pooling feature P5 of size 53 × 32;
S305: convolving P5 with a 5 × 5 kernel to obtain convolution feature C11 of size 49 × 32;
S306: performing global pooling on C11 to obtain pooling feature P6 of size 32;
S307: connecting P6 to a fully connected layer, obtaining the classification result through a Softmax classifier, and back-propagating the error to obtain the trained cutter wear monitoring model.
6. The tool wear monitoring method based on the multi-feature space convolutional neural network as claimed in claim 5, characterized in that: the convolution operations are all one-dimensional convolutions, and the convolution layer operation is:
a^l = g(a^(l−1) * K^l + b^l)
where a^(l−1) is the feature matrix of layer l − 1, K^l is the convolution kernel of layer l (for a one-dimensional convolution the kernel moves only along the time axis), b^l is the bias vector of layer l, and g is the ReLU activation function.
7. The tool wear monitoring method based on the multi-feature spatial convolution neural network as claimed in claim 5, characterized in that: the cross-entropy loss function of the Softmax classifier in S307 is:
loss = −Σ_i y_i log f(z_i)
where y_i is the true label of the current sample, z_i is the output value of the fully connected layer, and f is the Softmax function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210376930.1A CN114833636B (en) | 2022-04-12 | 2022-04-12 | Cutter wear monitoring method based on multi-feature space convolution neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114833636A CN114833636A (en) | 2022-08-02 |
CN114833636B (en) | 2023-02-28
Family
ID=82564454
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210376930.1A Active CN114833636B (en) | 2022-04-12 | 2022-04-12 | Cutter wear monitoring method based on multi-feature space convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114833636B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11267949A (en) * | 1998-03-20 | 1999-10-05 | Kawasaki Heavy Ind Ltd | Device and method for detecting wear of tool |
CN102172849A (en) * | 2010-12-17 | 2011-09-07 | 西安交通大学 | Cutter damage adaptive alarm method based on wavelet packet and probability neural network |
CN104741638A (en) * | 2015-04-20 | 2015-07-01 | 江苏师范大学 | Turning cutter wear state monitoring system |
CN106874889A (en) * | 2017-03-14 | 2017-06-20 | 西安电子科技大学 | Multiple features fusion SAR target discrimination methods based on convolutional neural networks |
CN108319962A (en) * | 2018-01-29 | 2018-07-24 | 安徽大学 | A kind of Tool Wear Monitoring method based on convolutional neural networks |
CN109753923A (en) * | 2018-12-29 | 2019-05-14 | 晋西车轴股份有限公司 | Monitoring method, system, equipment and the computer readable storage medium of tool abrasion |
CN110153801A (en) * | 2019-07-04 | 2019-08-23 | 西南交通大学 | A kind of cutting-tool wear state discrimination method based on multi-feature fusion |
CN113664612A (en) * | 2021-08-24 | 2021-11-19 | 沈阳工业大学 | Numerical control machine tool milling cutter abrasion real-time monitoring method based on deep convolutional neural network |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
IT202000010255A1 (en) * | 2020-05-07 | 2021-11-07 | Leonardo Spa | REAL-TIME MONITORING OF THE USE AND WEAR OF TOOLS FOR MECHANICAL WORKING FOR AN INTELLIGENT MANAGEMENT OF A TOOL PARK |
CN111633467B (en) * | 2020-05-15 | 2021-07-16 | 大连理工大学 | Cutter wear state monitoring method based on one-dimensional depth convolution automatic encoder |
CN111761409A (en) * | 2020-07-09 | 2020-10-13 | 内蒙古工业大学 | Multi-sensor numerical control machine tool cutter wear monitoring method based on deep learning |
Non-Patent Citations (1)
Title |
---|
Cutting tool condition monitoring based on wavelet neural networks; Feng Jining et al.; China Mechanical Engineering (《中国机械工程》); 2004-04-15 (No. 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114833636A (en) | 2022-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Intelligent acoustic-based fault diagnosis of roller bearings using a deep graph convolutional network | |
CN111695209B (en) | Rotary machine small sample health assessment method driven by meta-deep learning | |
Liu et al. | Tool wear estimation using a CNN-transformer model with semi-supervised learning | |
CN110991295B (en) | Self-adaptive fault diagnosis method based on one-dimensional convolutional neural network | |
CN112668105B (en) | Helicopter transmission shaft abnormity judgment method based on SAE and Mahalanobis distance | |
CN112881942B (en) | Abnormal current diagnosis method and system based on wavelet decomposition and empirical mode decomposition | |
Chen et al. | Compound fault diagnosis for industrial robots based on dual-transformer networks | |
CN112907561A (en) | Notebook appearance flaw detection method based on deep learning | |
CN111105082A (en) | Workpiece quality prediction model construction method and prediction method based on machine learning | |
CN115824519B (en) | Comprehensive diagnosis method for valve leakage faults based on multi-sensor information fusion | |
CN114833636B (en) | Cutter wear monitoring method based on multi-feature space convolution neural network | |
CN114997276A (en) | Heterogeneous multi-source time sequence data abnormity identification method for compression molding equipment | |
CN116935892A (en) | Industrial valve anomaly detection method based on audio key feature dynamic aggregation | |
Zhao et al. | An intelligent diagnosis method of rolling bearing based on multi-scale residual shrinkage convolutional neural network | |
CN111291918A (en) | Rotating machine degradation trend prediction method based on stationary subspace exogenous vector autoregression | |
Wang et al. | Prediction of hot-rolled strip crown based on Boruta and extremely randomized trees algorithms | |
CN117171544B (en) | Motor vibration fault diagnosis method based on multichannel fusion convolutional neural network | |
Cheng et al. | Research on multi-signal milling tool wear prediction method based on GAF-ResNext | |
CN117372431A (en) | Image detection method of nano-imprint mold | |
Yu et al. | Fault feature of gearbox vibration signals based on morphological filter dynamic convolution autoencoder | |
CN106679911B (en) | Beam type structure damage identification method based on multi-scale data fusion theory | |
Zhu et al. | Tool wear condition monitoring based on multi-sensor integration and deep residual convolution network | |
CN116383739B (en) | Intelligent fault diagnosis method based on domain self-adaption multi-mode data fusion | |
CN116883393A (en) | Metal surface defect detection method based on anchor frame-free target detection algorithm | |
CN111563455A (en) | Damage identification method based on time series signal and compressed convolution neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||