CN112712063B - Tool wear value monitoring method, electronic device and storage medium - Google Patents


Info

Publication number
CN112712063B
CN112712063B (application CN202110062495.0A)
Authority
CN
China
Prior art keywords
dimensional
data sequence
wear value
dimensional data
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110062495.0A
Other languages
Chinese (zh)
Other versions
CN112712063A (en)
Inventor
黄海松
滕瑞
陈启鹏
杨凯
范青松
Current Assignee
Guizhou University
Original Assignee
Guizhou University
Priority date
Filing date
Publication date
Application filed by Guizhou University filed Critical Guizhou University
Priority to CN202110062495.0A priority Critical patent/CN112712063B/en
Publication of CN112712063A publication Critical patent/CN112712063A/en
Application granted granted Critical
Publication of CN112712063B publication Critical patent/CN112712063B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • G06F2218/06Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Machine Tool Sensing Apparatuses (AREA)

Abstract

The application relates to a tool wear value monitoring method, an electronic device and a storage medium. The tool wear value monitoring method comprises the following steps: acquiring sensor data of a tool in real time to obtain a multi-dimensional data sequence, wherein the multi-dimensional data sequence comprises multi-dimensional data collected by the sensors in time order; performing wavelet transformation on the multi-dimensional data sequence to obtain its energy characteristic values; integrating the energy characteristic values into a one-dimensional data sequence; imaging the one-dimensional data sequence to obtain an intermediate image; and predicting the real-time wear value of the tool from the intermediate image using a tool wear value prediction model.

Description

Tool wear value monitoring method, electronic device and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular, to a tool wear value monitoring method, an electronic device, and a storage medium.
Background
The continuous fusion of information technology with traditional manufacturing is accelerating the transformation and upgrading of manufacturing enterprises, and places new demands on the active perception of manufacturing process systems and on the autonomous decision-making capability of intelligent factories. Milling is one of the most widely applied technologies in manufacturing, and its continuous progress and improvement have an important influence on the industry. In actual shop machining, changes in the wear state of the tool greatly affect machining accuracy and surface integrity. To avoid unexpected shutdowns or scrapped parts, tools are often replaced in advance, so their service life is not fully utilized. Monitoring tool wear during machining is therefore very important for improving workpiece quality and reducing machining cost.
The prior art focuses on classifying three tool wear states (light, moderate and severe) during machining and replacing the tool before it reaches severe wear; it cannot predict the real-time wear value of the tool, which is required when precision machining places strict demands on the degree of tool wear.
At present, no effective solution has been proposed in the related art for the problem that the real-time wear value of the tool cannot be predicted.
Disclosure of Invention
The embodiments of the present application provide a tool wear value monitoring method, an electronic device and a storage medium, so as to at least solve the problem in the related art that the real-time wear value of a tool cannot be predicted.
In a first aspect, an embodiment of the present application provides a tool wear value monitoring method, including:
acquiring sensor data of a tool in real time to obtain a multi-dimensional data sequence, wherein the multi-dimensional data sequence comprises multi-dimensional data acquired by a sensor in time order;
performing wavelet transformation on the multi-dimensional data sequence to obtain energy characteristic values of the multi-dimensional data sequence;
integrating the energy characteristic values of the multi-dimensional data sequence into a one-dimensional data sequence;
imaging the one-dimensional data sequence to obtain an intermediate image; and
predicting the real-time wear value of the tool from the intermediate image using a tool wear value prediction model.
In some of these embodiments, the method further comprises:
acquiring all historical sensor data collected by a sample tool during each cut over its whole life cycle, together with the actual tool wear value at each cut, and processing the historical sensor data to obtain sample images;
constructing a tool wear value prediction model based on a convolutional neural network, and inputting the sample images into the tool wear value prediction model to obtain a predicted tool wear value; and
obtaining a loss value from the predicted tool wear value and the actual tool wear value, training the tool wear value prediction model according to the loss value, and determining the model parameters of the tool wear value prediction model to obtain the trained tool wear value prediction model.
In some of these embodiments, the X, Y and Z axis directions are determined according to a rectangular spatial coordinate system, and the multi-dimensional data comprise force signals in the X, Y and Z axis directions, acceleration signals in the X, Y and Z axis directions, and an acoustic emission signal. Performing wavelet transformation on the multi-dimensional data sequence to obtain energy characteristic values of the multi-dimensional data sequence comprises:
splicing the force signals in the X, Y and Z axis directions into a one-dimensional force signal, and splicing the acceleration signals in the X, Y and Z axis directions into a one-dimensional acceleration signal; and
performing wavelet transformation on the one-dimensional force signal, the one-dimensional acceleration signal and the acoustic emission signal to obtain the energy characteristic values of each signal.
In some of these embodiments, integrating the energy characteristic values of the multi-dimensional data sequence into a one-dimensional data sequence comprises:
processing the energy characteristic values of the force signals in the X, Y and Z axis directions, the acceleration signals in the X, Y and Z axis directions and the acoustic emission signal to obtain the dimension-reduced energy characteristic values of each signal, and integrating the dimension-reduced energy characteristic values of each signal to obtain the one-dimensional data sequence.
In some of these embodiments, processing the energy characteristic values of the force signals in the X, Y and Z axis directions, the acceleration signals in the X, Y and Z axis directions and the acoustic emission signal to obtain the dimension-reduced energy characteristic values of each signal, and integrating them to obtain the one-dimensional data sequence, comprises:
transposing the one-dimensional energy characteristic values of each signal to obtain an original P-dimensional feature vector for each signal;
dividing the original P-dimensional feature vector into N segments, calculating the mean of the feature vector of each segment, and taking the mean of each segment as output to obtain an N-dimensional feature vector for each signal, wherein P is greater than N and N is greater than or equal to 1; and
transposing the N-dimensional feature vectors of the signals to obtain a one-dimensional feature vector for each signal, and integrating the one-dimensional feature vectors of the signals to obtain the one-dimensional data sequence.
In some of these embodiments, performing wavelet transformation on the multi-dimensional data sequence to obtain the energy characteristic values of the multi-dimensional data sequence comprises:
sub-sampling the multi-dimensional data sequence and cutting it into M sampling points, wherein M is greater than or equal to 1.
In some of these embodiments, imaging the one-dimensional data sequence to obtain an intermediate image comprises:
scaling all data of the one-dimensional data sequence to a preset interval to obtain a scaled one-dimensional data sequence, wherein the one-dimensional data sequence comprises the collected multi-dimensional data and the timestamps corresponding to the multi-dimensional data;
mapping the scaled one-dimensional data sequence into point data in polar coordinates, wherein the point data comprise all points of the one-dimensional data sequence mapped into polar coordinates, and the timestamps and the multi-dimensional data are converted into the radius and the angle of the point data in polar coordinates, respectively; and
calculating the correlation between the point data in polar coordinates to obtain an N × N intermediate image.
In some of these embodiments, calculating the correlation between the point data in polar coordinates to obtain an N × N intermediate image comprises:
calculating, in polar coordinates, the sum of the angles and the difference of the angles between each point of the point data and every other point, and obtaining the N × N intermediate image from the cosine of the angle sum or the cosine of the angle difference.
In some of these embodiments, the tool wear value prediction model comprises a convolutional layer, a pooling layer, a flatten layer and three fully connected layers, and the method further comprises:
acquiring all historical sensor data collected by a sample tool during each cut over its whole life cycle, together with the actual tool wear value at each cut, and processing the historical sensor data to obtain sample images;
constructing a tool wear value prediction model based on a ResNet residual network, inputting the sample images into the tool wear value prediction model, and extracting the features of the sample images through the convolutional layer to obtain a feature matrix;
inputting the feature matrix into the pooling layer, which reduces the feature matrix to obtain a reduced feature matrix, and inputting the reduced feature matrix into the flatten layer to obtain flattened one-dimensional features; and
inputting the one-dimensional features into the three fully connected layers, which linearly sum the one-dimensional features with different weights to obtain the predicted tool wear value.
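By way of illustration, the pooling, flatten and fully connected stages described above can be sketched as follows. The feature-map size, layer widths, ReLU activations and random weights are all assumptions for the sketch, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_wear(feature_map, weights):
    """Forward pass of the regression head: 2x2 average pooling,
    flatten, then three fully connected layers ending in a single
    predicted wear value. Shapes and ReLU activations are assumptions."""
    h, w = feature_map.shape
    # Pooling layer: 2x2 average pooling reduces the feature matrix
    pooled = feature_map.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    x = pooled.ravel()                       # flatten layer
    for W, b in weights[:-1]:                # first two FC layers + ReLU
        x = np.maximum(W @ x + b, 0.0)
    W, b = weights[-1]                       # final FC layer -> scalar
    return float((W @ x + b)[0])

feature_map = rng.standard_normal((8, 8))    # e.g. output of the conv layers
dims = [16, 32, 8, 1]                        # flattened pooled size is 16
weights = [(rng.standard_normal((dims[i + 1], dims[i])) * 0.1,
            np.zeros(dims[i + 1])) for i in range(3)]
wear_value = predict_wear(feature_map, weights)
```

With trained weights in place of the random ones, the final scalar would be the predicted tool wear value for one intermediate image.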
In a second aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor executes the computer program to implement the tool wear value monitoring method according to the first aspect.
In a third aspect, the present application provides a storage medium, on which a computer program is stored, which when executed by a processor implements the tool wear value monitoring method according to the first aspect.
Compared with the related art, the tool wear value monitoring method, electronic device and storage medium provided by the embodiments of the present application acquire sensor data of the tool in real time to obtain a multi-dimensional data sequence comprising multi-dimensional data collected by the sensors in time order, perform wavelet transformation on the multi-dimensional data sequence to obtain its energy characteristic values, integrate the energy characteristic values into a one-dimensional data sequence, image the one-dimensional data sequence to obtain an intermediate image, and predict the real-time wear value of the tool from the intermediate image using a tool wear value prediction model. This solves the problem that the real-time wear value of the tool cannot be predicted: the wear value is monitored in real time, so the time for replacing the tool can be determined more accurately.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a block diagram of a hardware structure of an application terminal of a tool wear value monitoring method according to an embodiment of the present application;
FIG. 2 is a flow chart of a tool wear value monitoring method according to an embodiment of the present application;
FIG. 3 is a flow chart of a tool wear value monitoring method according to a preferred embodiment of the present application;
FIG. 4 is a schematic view of workpiece processing according to a preferred embodiment of the present application;
FIG. 5 is a timing diagram of processing acquired sensor data in accordance with a preferred embodiment of the present application;
FIG. 6 is a schematic diagram of the respective concatenation of the triaxial force and acceleration signals into a channel of data according to a preferred embodiment of the present application;
FIG. 7 is a flow chart for converting a one-dimensional data sequence into intermediate images based on the GAF technique according to the preferred embodiment of the present application;
FIG. 8 is a schematic diagram of a GAF-based technique for converting a one-dimensional data sequence into an intermediate image according to the preferred embodiment of the present application;
FIG. 9 is a flow chart of tool wear value predictive model training in accordance with a preferred embodiment of the present application;
FIG. 10 is a schematic diagram of residual learning according to a preferred embodiment of the present application;
FIG. 11 is a schematic diagram of residual modules in accordance with a preferred embodiment of the present application;
FIG. 12 is a schematic diagram of an on-line tool wear value monitoring model according to a preferred embodiment of the present application;
FIG. 13 is a graph of the results of on-line monitoring of tool wear values in accordance with a preferred embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The method provided by the embodiment can be executed in a terminal, a computer or a similar operation device. Taking the example of the operation on the terminal, fig. 1 is a hardware structure block diagram of an application terminal of the tool wear value monitoring method according to the embodiment of the present application. As shown in fig. 1, the terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to the tool wear value monitoring method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The present embodiment provides a tool wear value monitoring method, and fig. 2 is a flowchart of the tool wear value monitoring method according to the embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
step S201, acquiring sensor data of the tool in real time to obtain a multidimensional data sequence, wherein the multidimensional data sequence comprises multidimensional data acquired by a sensor according to a time sequence.
In the present embodiment, the multi-dimensional data sequence comprises force signals in the X, Y and Z axis directions, acceleration signals in the X, Y and Z axis directions and an acoustic emission signal, acquired by the sensors in time order.
It should be noted that the acoustic emission signal is an ultra-high-frequency stress wave pulse signal generated during plastic deformation in metal machining by lattice distortion, crack growth and material release. The energy of the acoustic emission elastic wave reflects the properties of the material or component, so a given state of the material or component can be judged by detecting the acoustic emission signal.
Step S202, performing wavelet transformation on the multi-dimensional data sequence to obtain an energy characteristic value of the multi-dimensional data sequence.
It should be noted that the Wavelet Transform (WT) is a transform analysis method that inherits and develops the localization idea of the short-time Fourier transform while overcoming its drawback that the window size does not change with frequency. It provides a "time-frequency" window that changes with frequency, making it an ideal tool for time-frequency analysis and processing of signals. Its main characteristic is that it can fully highlight features of certain aspects of a problem, analyze locally in both time (space) and frequency, and refine the signal (function) at multiple scales through scaling and translation operations, finally achieving fine time resolution at high frequencies and fine frequency resolution at low frequencies. It automatically adapts to the requirements of time-frequency signal analysis and can therefore focus on any detail of the signal.
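As an illustration of extracting energy characteristic values by wavelet transform, the following is a minimal sketch using a Haar basis with a three-level decomposition. The patent does not specify the wavelet basis, decomposition depth or exact energy definition, so these are assumptions:

```python
import numpy as np

def haar_dwt_step(x):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]                  # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Decompose a 1-D signal and return the energy of each sub-band,
    where energy is the sum of squared coefficients."""
    energies = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt_step(approx)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))      # final approximation band
    return np.array(energies)

# Example: a sinusoid as a stand-in for one sensor channel
t = np.linspace(0, 1, 256)
sig = np.sin(2 * np.pi * 8 * t)
feats = wavelet_energy_features(sig)
```

Because the Haar transform is orthonormal, the sub-band energies sum to the total energy of the input signal.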
Step S203, integrate the energy characteristic values of the multi-dimensional data sequence into a one-dimensional data sequence.
For example, suppose that wavelet transformation gives energy characteristic values A, B and C for the force signals in the three axis directions, energy characteristic values D, E and F for the acceleration signals in the three axis directions, and an energy characteristic value for the acoustic emission signal, where the three axis directions are the X, Y and Z axis directions. The energy characteristic values A, B and C of the force signals are spliced into a one-dimensional energy characteristic value for the force signal, the energy characteristic values D, E and F of the acceleration signals are spliced into a one-dimensional energy characteristic value for the acceleration signal, and the one-dimensional energy characteristic value of the force signal, the one-dimensional energy characteristic value of the acceleration signal and the energy characteristic value of the acoustic emission signal are spliced into a one-dimensional data sequence.
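The splicing in this example amounts to a simple concatenation; a sketch follows, where the numeric energy values are purely illustrative:

```python
import numpy as np

# Hypothetical per-signal energy feature vectors (values are illustrative);
# in practice each comes from the wavelet transform of one spliced channel.
force_energy = np.array([0.9, 0.4, 0.2, 0.1])    # spliced X/Y/Z force signal
accel_energy = np.array([0.5, 0.3, 0.2, 0.05])   # spliced X/Y/Z acceleration signal
ae_energy    = np.array([0.7, 0.6, 0.1, 0.02])   # acoustic emission signal

# Integrate the energy characteristic values into one 1-D data sequence
one_dim_sequence = np.concatenate([force_energy, accel_energy, ae_energy])
```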
Step S204, imaging the one-dimensional data sequence to obtain an intermediate image.
In this embodiment, the one-dimensional data sequence may be imaged based on the Gramian angular field (GAF) coding technique or a recurrence plot.
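As an illustration of the GAF encoding mentioned here, the following is a minimal sketch of a Gramian Angular Summation Field, assuming the preset scaling interval is [-1, 1]; the function name and example series are illustrative only:

```python
import numpy as np

def gramian_angular_field(series, summation=True):
    """Encode a 1-D series as a Gramian Angular Field image.
    The series is rescaled to [-1, 1], each value is mapped to an
    angle phi = arccos(x), and the image entries are cos(phi_i + phi_j)
    (summation field) or cos(phi_i - phi_j) (difference field)."""
    x = np.asarray(series, dtype=float)
    # Min-max scale to the preset interval [-1, 1]
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1, 1))        # angle in polar coordinates
    if summation:
        return np.cos(phi[:, None] + phi[None, :])
    return np.cos(phi[:, None] - phi[None, :])

series = np.sin(np.linspace(0, np.pi, 16))
img = gramian_angular_field(series)           # 16 x 16 intermediate image
```

An N-point sequence thus yields the N × N intermediate image referred to above; the timestamp order is preserved along both axes of the image.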
Step S205, predicting the real-time wear value of the tool from the intermediate image using the tool wear value prediction model.
Through the above steps, sensor data of the tool are acquired in real time to obtain a multi-dimensional data sequence comprising multi-dimensional data collected by the sensors in time order; the multi-dimensional data sequence is wavelet-transformed to obtain its energy characteristic values; the energy characteristic values are integrated into a one-dimensional data sequence; the one-dimensional data sequence is imaged to obtain an intermediate image; and the real-time wear value of the tool is predicted from the intermediate image using the tool wear value prediction model. This solves the problem that the real-time wear value of the tool cannot be predicted, and the wear value is monitored in real time so that the time for replacing the tool can be determined more accurately. In addition, weak signals in the multi-dimensional data sequence are amplified by the wavelet transform, so the resulting intermediate image has more detailed features, further improving the accuracy of the predicted tool wear value.
In some embodiments, the training method of the tool wear value prediction model comprises the following steps:
step S210, acquiring all historical sensor data acquired by the sample cutter during each cutting in the whole life cycle and the actual wear value of the cutter during each cutting, and processing the historical sensor data to obtain a sample image.
And S211, constructing a tool wear value prediction model based on the convolutional neural network, inputting the sample image to the tool wear value prediction model, and obtaining a predicted tool wear value.
And S212, obtaining a loss value according to the predicted cutter wear value and the actual cutter wear value, training a cutter wear value prediction model according to the loss value, determining model parameters of the cutter wear value prediction model, and obtaining the trained cutter wear value prediction model.
Through the steps, the sample image is input into the tool wear value prediction model to obtain a predicted tool wear value, the loss value is obtained according to the predicted tool wear value and the actual tool wear value, the tool wear value prediction model is trained according to the loss value, the training of the tool wear value prediction model is achieved, and the method is a precondition for monitoring the tool wear value in real time according to the trained tool wear value prediction model.
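The loss value in steps S211 and S212 can be sketched, for example, as a mean squared error, a common choice for regression; the patent does not name the loss function, so MSE and the numeric values below are assumptions:

```python
import numpy as np

def mse_loss(predicted, actual):
    """Mean squared error between predicted and actual wear values."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean((predicted - actual) ** 2)

# Illustrative wear values (e.g. in micrometres) for one training batch
pred = np.array([52.0, 60.5, 71.0])
true = np.array([50.0, 62.0, 70.0])
loss = mse_loss(pred, true)
```

The model parameters would then be updated by backpropagating this loss until it converges on the training set.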
In some of these embodiments, the X, Y and Z axis directions are determined according to a rectangular spatial coordinate system, and the multi-dimensional data comprise force signals in the X, Y and Z axis directions, acceleration signals in the X, Y and Z axis directions, and an acoustic emission signal. Step S202, performing wavelet transformation on the multi-dimensional data sequence to obtain its energy characteristic values, comprises the following steps:
Step S2020, splicing the force signals in the X, Y and Z axis directions into a one-dimensional force signal, and splicing the acceleration signals in the X, Y and Z axis directions into a one-dimensional acceleration signal.
Step S2021, performing wavelet transformation on the one-dimensional force signal, the one-dimensional acceleration signal and the acoustic emission signal to obtain the energy characteristic values of each signal.
Through the above steps, the force and acceleration signals in the X, Y and Z axis directions are spliced into a one-dimensional force signal and a one-dimensional acceleration signal respectively, and the weak signals within them are amplified and converted into energy characteristic values by the wavelet transform. The amplified signals are then imaged and input into the tool wear value prediction model, which improves the accuracy of the subsequent tool wear value detection.
In some embodiments, S203, integrating the energy characteristic values of the multi-dimensional data sequence into the one-dimensional data sequence includes:
processing the energy characteristic values of the force signals in the X, Y and Z axis directions, the acceleration signals in the X, Y and Z axis directions, and the acoustic emission signal to obtain the dimension-reduced energy characteristic value of each signal, and integrating the dimension-reduced energy characteristic values to obtain a one-dimensional data sequence.
By the method, the energy characteristic values of the signals after dimension reduction are integrated to obtain the one-dimensional data sequence, and the calculation amount of converting the one-dimensional data sequence into the two-dimensional image is reduced.
In some embodiments, processing the energy characteristic values of the force signals in the X, Y and Z axis directions, the acceleration signals in the X, Y and Z axis directions, and the acoustic emission signal to obtain the dimension-reduced energy characteristic value of each signal, and integrating the dimension-reduced energy characteristic values to obtain the one-dimensional data sequence, includes the following steps:
step S220, transposing the one-dimensional energy characteristic value of each signal to obtain the original P-dimensional characteristic vector of each signal.
Step S221, dividing the original P-dimensional feature vector into N segments, calculating the average value of the feature vector of each segment, and taking the average value of the feature vector of each segment as output to obtain the N-dimensional feature vector of each signal, wherein P is greater than N, and N is greater than or equal to 1.
Step S222, transposing the N-dimensional feature vectors of the signals to obtain one-dimensional feature vectors of the signals, and integrating the one-dimensional feature vectors of the signals to obtain a one-dimensional data sequence.
Through these steps, the one-dimensional energy characteristic value of each signal is transposed into a multidimensional vector, the vector is reduced in dimension, and the result is transposed back into a one-dimensional data sequence. This realizes the dimension reduction of the one-dimensional data sequence and reduces the amount of computation required to subsequently convert it into a two-dimensional image.
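The segment-and-average reduction of steps S220 to S222 is Piecewise Aggregate Approximation (PAA), as the preferred embodiment later names it. A minimal numpy sketch, assuming equal-width segments reduced by their means (the function name `paa` is ours):

```python
import numpy as np

def paa(vec, n_out):
    """Piecewise Aggregate Approximation (steps S220 to S222): split the
    P-dimensional feature vector into n_out segments and output each
    segment's mean, reducing P dimensions to n_out (P > n_out >= 1)."""
    p = len(vec)
    # integer segment boundaries; handles P not divisible by n_out
    edges = np.linspace(0, p, n_out + 1).astype(int)
    return np.array([vec[edges[i]:edges[i + 1]].mean() for i in range(n_out)])

feat = np.arange(15039, dtype=float)   # e.g. a (15039, 1) energy feature, flattened
reduced = paa(feat, 224)               # -> 224-dimensional vector
print(reduced.shape)                   # (224,)
```

The per-signal results, transposed back to column form, are what the patent stacks into the (224, 3) one-dimensional data sequence.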
In some embodiments, performing the wavelet transform on the multidimensional data sequence to obtain its energy characteristic values includes: subsampling the multi-dimensional data sequence and clipping it to M sampling points, where M ≥ 1.
For example, if the original multidimensional data sequence has 8000 sampling points, it is subsampled and clipped to 6000 sampling points.
By the method, sampling points of the multi-dimensional data sequence are reduced, and the efficiency of obtaining the corresponding energy characteristic value by wavelet transformation of the multi-dimensional data sequence is improved.
In some embodiments, S204, imaging the one-dimensional data sequence to obtain the intermediate image includes the following steps:
step S230, scaling all data of the one-dimensional data sequence to a preset interval to obtain a scaled one-dimensional data sequence, where the one-dimensional data sequence includes the collected multidimensional data and a timestamp corresponding to the multidimensional data.
Step S231, mapping the scaled one-dimensional data sequence to point data under polar coordinates, where the point data includes all points mapped by the one-dimensional data sequence under polar coordinates, and the timestamp and the multidimensional data are respectively converted into a radius and an angle of the point data under polar coordinates.
Step S232, calculating the correlation between the point data under the polar coordinates, and obtaining an N × N intermediate image.
Through these steps, the one-dimensional data sequence is converted into point data in polar coordinates, and the correlation between the point data is calculated to obtain an N × N intermediate image. This realizes the conversion from the one-dimensional data sequence to a two-dimensional image, a precondition for inputting the converted image into the tool wear value prediction model for training and for predicting the tool wear value in real time.
In some of these embodiments, calculating the correlation between the point data at polar coordinates, and obtaining an N × N intermediate image comprises:
calculating, for each point of the point data in polar coordinates, the sum and the difference of its angle with the angle of every other point, and obtaining the N × N intermediate image from the cosine values of the angle sums or the sine values of the angle differences.
In the above manner, the one-dimensional data sequence is converted into a corresponding intermediate image based on the Gramian Angular Field coding technique.
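The Gramian Angular Field encoding of steps S230 to S232 can be sketched in numpy. This is an illustrative sketch, not the patent's implementation: values are rescaled to [-1, 1], encoded as angles, and the N × N image is built from the angle sums (GASF) or differences (GADF); the timestamp radius only positions points on the polar plot and does not enter the image itself:

```python
import numpy as np

def gaf(x, field="summation"):
    """Gramian Angular Field sketch: rescale the sequence to [-1, 1],
    encode each value as an angle theta = arccos(x_scaled), then build
    GASF[i, j] = cos(theta_i + theta_j) or GADF[i, j] = sin(theta_i - theta_j)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    xs = ((x - hi) + (x - lo)) / (hi - lo)    # scaled into [-1, 1]
    xs = np.clip(xs, -1.0, 1.0)               # guard against rounding error
    theta = np.arccos(xs)
    if field == "summation":
        return np.cos(theta[:, None] + theta[None, :])   # GASF
    return np.sin(theta[:, None] - theta[None, :])       # GADF

seq = np.sin(np.linspace(0, 4 * np.pi, 224))
img = gaf(seq)
print(img.shape)    # (224, 224), one channel of the (224, 224, 3) image
```

Stacking the three per-signal images channel-wise gives the (224, 224, 3) intermediate image described in the preferred embodiment.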
As another possible implementation, S204, imaging the one-dimensional data sequence to obtain the intermediate image includes the following steps:
Step S240, taking one data point in the one-dimensional data sequence as a starting point and any other data point as an end point to form a vector; all such vectors form a two-dimensional spatial trajectory.
Step S241, solving the Euclidean norm between each vector in the two-dimensional spatial trajectory and every vector including itself, and forming a recurrence matrix R from all the Euclidean norms.
In step S242, an intermediate image is generated by the recursive matrix R.
Through these steps, the one-dimensional data sequence is converted into the corresponding intermediate image based on a recurrence plot.
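The vector construction in steps S240 to S242 admits more than one reading; a common recurrence-plot formulation, sketched here under the assumption of a delay-1 two-dimensional embedding, fills R with the pairwise Euclidean distances between trajectory points:

```python
import numpy as np

def recurrence_matrix(x):
    """Recurrence-plot sketch: embed the 1-D sequence as 2-D points
    (consecutive pairs, i.e. a delay-1 embedding), then set
    R[i, j] = ||p_i - p_j||, the Euclidean norm between every pair of
    trajectory points. The resulting matrix is rendered as the image."""
    x = np.asarray(x, dtype=float)
    pts = np.stack([x[:-1], x[1:]], axis=1)       # 2-D trajectory
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

R = recurrence_matrix(np.sin(np.linspace(0, 6, 100)))
print(R.shape)   # (99, 99)
```

R is symmetric with a zero diagonal, the visual signature of a recurrence plot.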
In some embodiments, the tool wear value prediction model comprises a convolutional layer, a pooling layer, a flatten layer and three fully-connected layers, and the tool wear value monitoring method further comprises the following steps:
step S250, acquiring all historical sensor data acquired by a sample cutter during each cutting in the whole life cycle and the actual wear value of the cutter during each cutting, and processing the historical sensor data to obtain a sample image;
step S251, a cutter wear value prediction model is built based on a ResNet residual error network, a sample image is input to the cutter wear value prediction model, and the characteristics of the sample image are extracted through a convolution layer to obtain a characteristic matrix;
Step S252, inputting the feature matrix into the pooling layer, reducing it there to obtain a reduced feature matrix, and inputting the reduced feature matrix into the flatten layer to obtain an unfolded one-dimensional feature;
Step S253, inputting the one-dimensional feature into the three fully-connected layers, where linear summation of the one-dimensional feature with different weights yields the predicted tool wear value.
Through these steps, a tool wear value prediction model is built based on the ResNet residual network, and the predicted tool wear value is obtained through the convolutional layer, pooling layer, flatten layer and three fully-connected layers of the model.
In some of these embodiments, the tool wear value monitoring method further comprises the steps of:
and step S260, acquiring all historical sensor data acquired by the sample cutter during each cutting in the whole life cycle and the actual wear value of the cutter during each cutting, and processing the historical sensor data to obtain a sample image.
Step S261, a tool wear value prediction model is built based on the VGGNet network, a sample image is input to the tool wear value prediction model, and a predicted tool wear value is obtained.
It is noted that VGGNet is a deep convolutional neural network developed by the Visual Geometry Group at the University of Oxford together with researchers at Google DeepMind. It explores the relationship between the depth and the performance of convolutional neural networks, and successfully constructs networks 16 to 19 layers deep by repeatedly stacking 3×3 convolutional kernels and 2×2 max-pooling layers.
And S262, acquiring a loss value according to the predicted cutter wear value and the actual cutter wear value, training a cutter wear value prediction model according to the loss value, determining model parameters of the cutter wear value prediction model, and obtaining the trained cutter wear value prediction model.
Through the steps, a tool wear value prediction model is constructed based on the VGGNet network, and the tool wear value prediction model is trained through the sample image and the actual tool wear value, so that the training of the tool wear value prediction model is completed.
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
FIG. 3 is a flow chart of a tool wear value monitoring method according to a preferred embodiment of the present application. As shown in fig. 3, the process includes the following steps:
step S301, collecting sensor data generated in the process of machining a workpiece by using a cutter.
A tool is used to machine a workpiece. Fig. 4 is a schematic view of machining a workpiece according to a preferred embodiment of the present application. As shown in fig. 4, the cutting force signals and acceleration signals generated in the x, y and z directions during machining are measured with a force sensor and an acceleration sensor mounted on the workpiece, and an acoustic emission sensor arranged on the workpiece measures the acoustic emission signal, which comprises high-frequency stress waves. Measurement finally yields sensor data of dimension 7: the cutting force signals in the x, y and z directions, the cutting acceleration signals in the x, y and z directions, and the acoustic emission signal. A charge amplifier amplifies the weak signals measured by the sensors, and an acquisition card then acquires the multidimensional sensor data. In addition, each time the tool cuts the workpiece in the x direction, the actual wear value of the tool face is measured with a microscopic device.
Step S302, the collected sensor data is processed to obtain an intermediate image associated with the sensor data.
The acquired sensor data are processed. Fig. 5 is a timing chart for processing the acquired sensor data according to the preferred embodiment of the present application. As shown in fig. 5, first, the seven signals in the sensor data, namely the triaxial force signals, the triaxial acceleration signals and the acoustic emission signal, are subsampled and clipped to 5000 sampling points each, where the three axes are the X, Y and Z directions of the spatial rectangular coordinate system;
the force signal and the acceleration signal after clipping each consist of data in three directions (X, Y, Z), so their data formats are each (5000, 3). Fig. 6 is a schematic diagram of splicing the triaxial force signal and acceleration signal into channel data according to the preferred embodiment of the present application; as shown in fig. 6, the data in the X, Y and Z axis directions are spliced head-to-tail into channel data, so that the force signal and the acceleration signal each take the format (15000, 1);
the processed force signal (15000, 1), acceleration signal (15000, 1) and acoustic emission signal (5000, 1) are decomposed with a seven-level wavelet packet decomposition using the db3 wavelet, so that each input signal is decomposed into seven frequency band components. The energy of each frequency band after the wavelet decomposition is extracted as the wavelet packet energy characteristic value, and by splicing the seven frequency band components as in fig. 6, the wavelet packet energy characteristic values of the signals (force, acceleration and acoustic emission) are obtained as {(15039, 1), (15039, 1), (5013, 1)}, each being a one-dimensional feature;
the wavelet packet energy characteristic value of each signal is reduced in size based on the Piecewise Aggregate Approximation (PAA) technique: the one-dimensional data {(15039, 1), (15039, 1), (5013, 1)} are transposed into multidimensional vectors {(1, 15039), (1, 15039), (1, 5013)}, reduced by PAA to {(1, 224), (1, 224), (1, 224)}, transposed back to {(224, 1), (224, 1), (224, 1)}, and stacked to obtain a one-dimensional data sequence of size (224, 3). The one-dimensional data sequence is converted into an intermediate image (224, 224, 3) based on the Gramian Angular Field (GAF) technique.
Step S303, constructing a tool wear value prediction model based on the ResNet residual error network, completing the training of the tool wear value prediction model, and obtaining the trained tool wear value prediction model.
A convolutional neural network, which performs excellently in the image field, is used to establish the mapping between the sensor data and the tool wear value. A ResNet residual network model with a residual learning structure is selected to construct the tool wear value prediction model, and the relevant features among the signals are adaptively extracted by the convolutional and pooling layers in the model. All the historical sensor data acquired during each cut of a sample tool over its full life cycle, together with the actual tool wear value at each cut, are acquired; all the historical sensor data are converted into sample images by the method of step S302 and input into the constructed tool wear value prediction model to obtain a predicted tool wear value. A loss value is computed from the predicted and actual tool wear values, the model is trained according to the loss value, the model parameters are determined, and the trained tool wear value prediction model is obtained.
And step S304, inputting the intermediate image into the trained tool wear value prediction model to obtain a predicted tool wear value.
Through the steps above, the sensor data generated while machining the workpiece are collected in real time, processed based on the wavelet transform and the PAA technique, and converted into an intermediate image based on the GAF technique. The intermediate image is input into the trained tool wear value prediction model to obtain a predicted tool wear value, so the tool wear value is monitored in real time. The wavelet transform amplifies the weak components of the sensor data, so the amplified data can be converted into an intermediate image with more detailed features, and inputting that image into the prediction model yields a more accurate predicted tool wear value. Imaging the sensor data collected during machining with the GAF technique retains the original feature information of the signals while enhancing the time-series feature information. In addition, selecting a ResNet residual network model with a residual learning structure to construct the tool wear value prediction model addresses the problems of gradient explosion and network degradation caused by deepening the network, and avoids the vanishing-gradient phenomenon.
Fig. 7 is a flowchart of converting a one-dimensional data sequence into an intermediate image based on the GAF technique according to the preferred embodiment of the present application, and as shown in fig. 7, the converting the one-dimensional data sequence into the intermediate image based on the GAF technique includes the following steps:
step S701, mapping the one-dimensional data sequence acquired by the sensor into point data under polar coordinates.
The one-dimensional data sequence X = {x_1, x_2, …, x_n} acquired by the sensor is scaled to the interval [-1, 1] by equation (1), giving the scaled sequence X~:

x~_i = ((x_i − max(X)) + (x_i − min(X))) / (max(X) − min(X)) (1)

Equation (2) encodes each value of X~ as a cosine angle θ, and equation (3) encodes the timestamp of each data point in the one-dimensional data sequence as a radius r:

θ_i = arccos(x~_i), −1 ≤ x~_i ≤ 1 (2)

r_i = t_i / N (3)

where t_i is the timestamp and N is a constant determined by the total time period of the sequence that regularizes the span of the polar coordinate system. By this method the one-dimensional data sequence X is mapped into polar coordinates. Fig. 8 is a schematic diagram of the conversion of a one-dimensional data sequence into an intermediate image based on the GAF technique according to the preferred embodiment of the present application; as shown in fig. 8, the one-dimensional data sequence is mapped to point data in polar coordinates.
Step S702, converts the point data in polar coordinates into an intermediate image based on the GAF technique.
The one-dimensional data sequence in the polar coordinate system contains time-related information, so it can be reconstructed with the GAF technique, which can generate two intermediate images through different equations: equation (4) defines the Gramian Angular Summation Field (GASF) based on the cosine function, and equation (5) defines the Gramian Angular Difference Field (GADF) based on the sine function. The one-dimensional data sequence of one channel is converted into an intermediate image in this way; assuming that the force, acceleration and acoustic emission signals in the one-dimensional data sequence each have size (224, 1), the two-dimensional matrices converted by the GAF technique are each (224, 224, 1), and the two-dimensional matrices of the three signals are integrated together to form the (224, 224, 3) intermediate image.
GASF = cos(θ_i + θ_j) = X~′ · X~ − (√(I − X~²))′ · √(I − X~²) (4)

GADF = sin(θ_i − θ_j) = (√(I − X~²))′ · X~ − X~′ · √(I − X~²) (5)

In the formulas, X~′ is the transposed vector of X~, I is the unit row vector [1, 1, …, 1], and θ is the cosine angle of each datum of X~ obtained by equation (2).
As shown in fig. 8, the one-dimensional data sequence has several obvious peaks and troughs and a small initial amplitude. When a large peak appears, the corresponding GADF and GASF feature maps show darker colors. The different features in the intermediate images GADF and GASF obtained by the Gramian Angular Field technique, such as colors, points and lines, completely map the information of the one-dimensional data sequence, and the shades of color in the GADF and GASF images represent the magnitudes of the values in the one-dimensional data sequence.
Through the steps, the one-dimensional data sequence is converted into the intermediate image based on the GAF technology, and the GAF technology reconstructs the characteristic information of the sensor data in the intermediate image while retaining the data in the one-dimensional data sequence.
FIG. 9 is a flowchart of tool wear value predictive model training according to a preferred embodiment of the present application, as shown in FIG. 9, the tool wear value predictive model training includes the steps of:
and step S901, constructing a tool wear value prediction model based on the ResNet residual error network.
The overall framework of the ResNet residual network is shown in table 1. The ResNet101 convolutional neural network is composed of a (7 × 7 × 64) convolutional layer (Conv), 33 bottleneck residual modules each composed of 3 convolutional layers, a pooling layer, and a fully-connected layer.
TABLE 1 ResNet network layer architecture
(1) Residual learning network structure (Residual learning)
Fig. 10 is a schematic diagram of residual learning according to a preferred embodiment of the present application. As shown in fig. 10, a feature x enters the residual module as input, and the data feature learned by the network inside the module is H(x). Residual learning refers to using several parameterized network layers to learn and adjust the parameters of the residual F(x) = H(x) − x between the input and output data.
The residual module then adds x, the identity mapping of the feature x, back to the learned residual F(x). The mathematical expression of residual learning in the residual module is shown in equations (6) and (7):
y_l = h(x_l) + F(x_l, W_l) (6)
x_{l+1} = f(y_l) (7)
In equations (6) and (7), x_l is the input of the l-th residual module in the ResNet network, x_{l+1} is the output of the input image after passing through the l-th residual module, h(x_l) = x_l denotes the identity mapping, F(x_l, W_l) is the residual function representing the residual learned by the residual module, W_l is the parameter optimized by the parameterized network layers through residual learning, and f is the ReLU activation function.
According to equations (6) and (7), the total features learned by the input data from layer l to layer L of the network are calculated as shown in equation (8):

x_L = x_l + Σ_{i=l}^{L−1} F(x_i, W_i) (8)
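The identity-shortcut algebra of equations (6) to (8) can be checked numerically. In this toy sketch the residual function is a single matrix product and the activation f is omitted, which is the assumption under which equation (8) holds exactly; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_module(x, W):
    """One toy residual module with an identity shortcut:
    y_l = h(x_l) + F(x_l, W_l), with h(x_l) = x_l and F(x_l, W_l) = W @ x_l.
    The activation of eq. (7) is left out so that eq. (8) is exact."""
    return x + W @ x

x = rng.normal(size=4)
Ws = [rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]

out = x.copy()
residual_sum = np.zeros_like(x)
for W in Ws:
    residual_sum += W @ out    # F(x_i, W_i) evaluated at the current input
    out = residual_module(out, W)

# eq. (8): x_L = x_l + sum of the residuals learned along the way
print(bool(np.allclose(out, x + residual_sum)))   # True
```

The shortcut means each module only has to learn a correction to its input, which is what makes very deep stacks trainable.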
(2) Identity mapping (Identity Mapping)
The identity mapping structure in the residual network module means that the output of the current layer is transmitted directly to the next layer without passing through any parameter layer. When the numbers of input and output channels of the residual network are equal:
y(x) = x + F(x, W_l) (9)
When the numbers of input and output channels of the network differ, the channels can be aligned by a simple zero-padding operation, or a 1 × 1 convolutional layer can be used as the mapping W_s so that the numbers of input and output channels are the same, i.e.:

y(x) = W_s · x + F(x, W_l) (10)
(3) Bottleneck structure (Bottleneck architecture)
The ResNet residual network is formed by stacking two different kinds of residual modules, the basic block and the bottleneck. ResNet18/34 are stacks of basic-block residual modules containing two convolutional layers each, while ResNet50/101/152 are formed by stacking bottleneck residual modules containing three convolutions. Fig. 11 is a schematic diagram of the residual modules according to the preferred embodiment of the present application. As shown in fig. 11, the bottleneck structure comprises 1×1, 3×3 and 1×1 convolutions; by using convolutional layers of size 1×1, the 3×3 convolutional filters of the middle layer are not affected by the input of the previous layer, and their output does not interfere with the next module, thereby saving computation time.
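The saving the bottleneck buys can be verified with quick parameter arithmetic. The channel sizes below (256 in/out, 64 inside) follow the common ResNet50/101 conv2_x configuration and are assumed here for illustration; biases are ignored:

```python
# Weight counts for a ResNet-style bottleneck (1x1 reduce, 3x3, 1x1 restore)
# versus a basic block of two 3x3 convolutions at the same 256 channels.
def conv_params(k, c_in, c_out):
    """Weights of a k x k convolution from c_in to c_out channels."""
    return k * k * c_in * c_out

bottleneck = (conv_params(1, 256, 64)     # 1x1 reduce: 256 -> 64
              + conv_params(3, 64, 64)    # 3x3 at the narrow width
              + conv_params(1, 64, 256))  # 1x1 restore: 64 -> 256
basic = conv_params(3, 256, 256) * 2      # two 3x3 layers at full width

print(bottleneck, basic)   # 69632 vs 1179648, roughly 17x fewer weights
```

This is why the deeper ResNet variants can afford three convolutions per module.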
Since the original ResNet101 convolutional neural network is a classification network while monitoring the tool wear value is a regression problem, a flatten layer and FC(3) (three fully-connected layers) replace the fully-connected layer of the original ResNet101, with neuron counts (1024, 512, 1). Fig. 12 is a schematic diagram of an online tool wear value monitoring model according to the preferred embodiment of the present application. As shown in fig. 12, the final output layer uses no activation function, and the first two layers use the ReLU activation function. A convolution kernel of size 7 × 7 × 64 with stride 2 encodes the sensor signal input and extracts features from the (224, 224, 3) signal image, reducing the length and width of the picture to half of the original. The pooling layer takes the maximum value of each region of the sample as the region representative to reduce the amount of computation and the number of parameters. The image output after pooling then passes through 33 bottleneck residual modules, which continuously extract features of different dimensions; the features are flattened by the flatten layer and finally collected into the fully-connected layers FC(3), where a linear weighted summation of the features yields the tool wear value corresponding to the sensor data at the current moment.
In addition, the batch normalization layer standardizes the network layer signals so that its output converges to a mean close to 0 and a standard deviation close to 1, which accelerates network convergence and reduces the sensitivity of the network to the initial model weights, and the Dropout layer randomly drops part of the neuron connections to prevent the model from overfitting during training.
Step S902, divide the data set into a training set and a test set.
All the collected historical sensor data are processed and converted into sample images, and all the sample images form a data set, which is divided into a training set and a test set in a ratio of 8:2. The training set is used to search for the optimal parameters of the model, and the test set does not participate in model training.
And step S903, the training set is used for training a tool wear value prediction model, and the loss function is reduced through continuous iteration.
The training set data are input into the tool wear value prediction model, and the output of layer l is:

x_l = f(W_l · x_{l−1}) (11)

In equation (11), x_l denotes the l-th layer output, W_l denotes the l-th layer weights, and f denotes the ReLU activation function.
The model adopts the Adam optimization algorithm, and the Mean Squared Error (MSE) between the predicted tool wear value output by the model and the actual tool wear value is used as the loss function. The calculation formula is as follows:
J_mse = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² (12)

In equation (12), y_i and ŷ_i respectively represent the actual tool wear value and the predicted tool wear value of the i-th training sample, and n represents the total number of monitoring samples.
The chain rule is used to drive the weights of each connection layer of the model network to adjust and update in the direction that reduces the objective function. The updating method is:
W_l^new = W_l^old − λ · ∂J_mse/∂W_l^old (13)

In equation (13), λ is the learning rate of the optimization algorithm used by the model, with range (0, 1); J_mse is the mean squared error obtained by equation (12); W_l^new is the new weight of layer l, and W_l^old is the old weight of layer l.
Step S904, it is determined whether the iteration is completed.
If the iterations are not finished, the process proceeds to step S905; if they are finished, the process proceeds to step S908. The number of training iterations of the tool wear value prediction model is set in advance, and whether the iterations are finished is judged against it.
And step S905, storing the current optimal parameters.
Throughout training, the optimal parameters generated during the training process are continuously stored and serve as the final model parameters when training finishes. For each training pass, sample images are selected in batches to update the model weights, so that the model has high accuracy and generalization performance after training, and through multiple training iterations the monitored values approach the true values.
Step S906, determines whether overfitting has occurred.
If overfitting has occurred, the process proceeds to step S907; if not, the process returns to step S903. The test set is input into the currently trained tool wear value prediction model to obtain a first prediction result, and the training set is input into the same model to obtain a second prediction result; if the difference between the first and second prediction results is greater than a set threshold, overfitting has occurred and the process proceeds to step S907.
In step S907, the model is readjusted, and the process proceeds to step S903.
When the tool wear value prediction model is judged to be overfitted, the network weights are not updated in that training pass, and training continues with the previously stored network weights. In addition, a threshold can be set: when the number of detected overfitting occurrences exceeds the set threshold, training stops and the historical optimal network weights are used as the final network weights of the tool wear value prediction model.
And step S908, loading the optimal parameters to obtain a trained cutter wear value prediction model, and finishing training.
And updating and storing the parameters which are optimal in the training set and the test set in the training process at any time as the parameters among the network layers of the final cutter state monitoring model.
In order to better verify the advantages of the model, the coefficient of determination (R²) and the mean absolute percentage error (MAPE) are selected as model evaluation criteria, so that the model evaluation result is verified from multiple angles and has generality. The coefficient of determination R² represents the degree of fit between the wear value monitored by the model and the true wear value; the closer R² is to 1, the better the fit between the monitored and true values. The calculation formula is as follows:
R² = 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − ȳ)² (14)

In equation (14), ȳ is the mean of all true values.
The MAPE better reflects the actual error between the monitored wear value and the true wear value. Its calculation formula is:

MAPE = (100%/n) Σ_{i=1}^{n} |(y_i − ŷ_i) / y_i| (15)
to enable comparison with similar tool wear monitoring studies, milling experimental data in open data of high speed numerical control machine tool health prediction competition provided by the american PHM association in 2010 was used. The main equipment used in the experiment and the relevant operating parameters are shown in tables 2 and 3.
TABLE 2 PHM data challenge experiment main equipment
TABLE 3 PHM data challenge experimental cutting parameters
(Table 3 is reproduced as an image in the original document.)
The same tool wear data were also used to build online tool wear monitoring models based on HMM, SVR, BPNN, FNN and one-dimensional DenseNet. MSE, MAPE and R² were selected as model evaluation criteria in the comparative experiments, and the comparison results of the seven models on the test set and training set are shown in Table 4.
TABLE 4 comparative results
(Table 4 is reproduced as an image in the original document.)
It should be noted that the Hidden Markov Model (HMM) is a classic machine learning model, widely used in speech recognition, natural language processing, pattern recognition and other fields.
Support Vector Regression (SVR) is the application of the SVM to regression problems.
The Back Propagation Neural Network (BPNN) is the most basic neural network: outputs are propagated forward and errors are propagated backward.
The Fuzzy Neural Network (FNN) combines fuzzy theory with neural networks; it integrates the advantages of both and unifies learning, association, identification and information processing.
The core idea of the one-dimensional DenseNet is to establish dense connections between earlier and later convolutional layers, i.e. to connect all layers directly while ensuring maximum information flow between layers in the network. This not only alleviates the vanishing-gradient problem, but also helps to extract deeper features from the signals, enhances feature propagation and improves system performance.
Compared with traditional machine learning and deep learning networks, the method proposed in this application performs well on all criteria, reflecting its strong feature extraction capability and generalization performance. The proposed image encoding of the one-dimensional data sequence enhances the characteristics of each signal; compared with the raw time-domain signal, the hidden fine-grained features can be better extracted by a deep neural network. Since the test set used here does not participate in training the model, the model's performance on the test set demonstrates its excellent generalization.
The best-performing model, GADF-CNN, is selected to monitor tool wear; the result is shown in FIG. 13, which is a diagram of the online tool wear value monitoring result according to the preferred embodiment of the present application. As can be seen from FIG. 13, the prediction output by the model essentially fits the true value, deviating only slightly in places, and its precision meets actual machining monitoring requirements. The acquired sensor data are converted into an image and the corresponding wear value is output by the model; the whole process takes milliseconds, meeting industrial online monitoring requirements.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1: acquiring sensor data of the tool in real time to obtain a multi-dimensional data sequence, wherein the multi-dimensional data sequence comprises multi-dimensional data acquired by the sensor in time order.
S2: performing wavelet transformation on the multi-dimensional data sequence to obtain energy characteristic values of the multi-dimensional data sequence.
S3: integrating the energy characteristic values of the multi-dimensional data sequence into a one-dimensional data sequence.
S4: imaging the one-dimensional data sequence to obtain an intermediate image.
S5: predicting the real-time wear value of the tool from the intermediate image using the tool wear value prediction model.
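As an illustrative sketch of the wavelet energy extraction in step S2 (the application does not fix the wavelet family or decomposition depth here; a one-level Haar decomposition is an assumption made for this sketch), the energy characteristic values of one signal could be computed as:

```python
import math

def haar_dwt(signal):
    """One-level Haar wavelet decomposition into approximation and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def band_energies(signal):
    """Energy characteristic value of each sub-band: the sum of squared coefficients."""
    approx, detail = haar_dwt(signal)
    return [sum(c * c for c in approx), sum(c * c for c in detail)]
```

For an orthogonal wavelet such as Haar, the sub-band energies sum to the energy of the original signal, which is why they are usable as compact features of each sensor channel.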
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the tool wear value monitoring method in the above embodiments, an embodiment of the present application may provide a storage medium. The storage medium stores a computer program; when executed by a processor, the computer program implements any of the tool wear value monitoring methods of the above embodiments.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A method of monitoring a wear value of a tool, comprising:
acquiring sensor data of a cutter in real time to obtain a multi-dimensional data sequence, wherein the multi-dimensional data sequence comprises multi-dimensional data acquired by a sensor according to a time sequence;
performing wavelet transformation on the multi-dimensional data sequence to obtain an energy characteristic value of the multi-dimensional data sequence;
integrating the energy characteristic values of the multi-dimensional data sequence into a one-dimensional data sequence;
imaging the one-dimensional data sequence to obtain an intermediate image;
predicting a real-time wear value of the tool according to the intermediate image by using a tool wear value prediction model;
the tool wear value prediction model comprises a convolution layer, a pooling layer, a flat layer and three full-connection layers, and the method further comprises the following steps:
acquiring all historical sensor data acquired by a sample cutter during each cutting in a whole life cycle and an actual cutter abrasion value during each cutting, and processing the historical sensor data to obtain a sample image;
constructing a cutter wear value prediction model based on a ResNet residual error network, inputting the sample image to the cutter wear value prediction model, and extracting the characteristics of the sample image through the convolution layer to obtain a characteristic matrix;
inputting the characteristic matrix into the pooling layer, reducing the characteristic matrix by the pooling layer to obtain a reduced characteristic matrix, and inputting the reduced characteristic matrix into the flat layer to obtain an unfolded one-dimensional characteristic;
inputting the one-dimensional characteristics into three full-connection layers, and performing linear summation on the one-dimensional characteristics by setting different weights in the three full-connection layers to obtain a predicted cutter wear value.
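The forward pass recited in claim 1 (convolution, pooling, flattening, then three fully connected layers) can be sketched structurally as follows. This is purely illustrative: the kernel, the max-pooling choice and all weights are assumptions for the sketch, not taken from the claim.

```python
def conv2d_valid(img, kernel):
    """Convolution layer: extract a feature matrix from the sample image (valid padding)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + a][j + b] * kernel[a][b] for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def max_pool2(feat):
    """Pooling layer: reduce the feature matrix with 2x2 max pooling."""
    return [[max(feat[i][j], feat[i][j + 1], feat[i + 1][j], feat[i + 1][j + 1])
             for j in range(0, len(feat[0]) - 1, 2)]
            for i in range(0, len(feat) - 1, 2)]

def dense(vec, weights):
    """Fully connected layer: linear summation of features under the given weights."""
    return [sum(v * w for v, w in zip(vec, col)) for col in weights]

def forward(img, kernel, fc1, fc2, fc3):
    feat = max_pool2(conv2d_valid(img, kernel))   # reduced feature matrix
    flat = [v for row in feat for v in row]       # flat layer: unfolded one-dimensional features
    return dense(dense(dense(flat, fc1), fc2), fc3)[0]  # predicted tool wear value
```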
2. The tool wear value monitoring method of claim 1, further comprising:
acquiring all historical sensor data acquired by a sample cutter during each cutting in a whole life cycle and an actual cutter abrasion value during each cutting, and processing the historical sensor data to obtain a sample image;
constructing a cutter wear value prediction model based on a convolutional neural network, inputting the sample image to the cutter wear value prediction model, and obtaining a predicted cutter wear value;
and obtaining a loss value according to the predicted cutter wear value and the actual cutter wear value, training the cutter wear value prediction model according to the loss value, determining model parameters of the cutter wear value prediction model, and obtaining the trained cutter wear value prediction model.
3. The tool wear value monitoring method of claim 1, wherein X, Y and Z axis directions are determined according to a rectangular spatial coordinate system, the multi-dimensional data comprises force signals in the X, Y and Z axis directions, acceleration signals in the X, Y and Z axis directions, and an acoustic emission signal, and performing wavelet transformation on the multi-dimensional data sequence to obtain the energy characteristic value of the multi-dimensional data sequence comprises:
splicing the force signals in the X, Y, Z axis direction into one-dimensional force signals, and splicing the acceleration signals in the X, Y, Z axis direction into one-dimensional acceleration signals;
and performing wavelet transformation on the one-dimensional force signal, the one-dimensional acceleration signal and the acoustic emission signal to obtain an energy characteristic value of each signal.
4. The tool wear value monitoring method of claim 1, wherein integrating the energy characteristic values of the multi-dimensional data sequence into a one-dimensional data sequence comprises:
processing the energy characteristic values of the force signals in the X, Y and Z axis directions, the acceleration signals in the X, Y and Z axis directions, and the acoustic emission signal to obtain the dimension-reduced energy characteristic value of each signal, and integrating the dimension-reduced energy characteristic values of the signals to obtain a one-dimensional data sequence.
5. The tool wear value monitoring method of claim 4, wherein processing the energy characteristic values of the force signals in the X, Y and Z axis directions, the acceleration signals in the X, Y and Z axis directions, and the acoustic emission signal to obtain the dimension-reduced energy characteristic value of each signal, and integrating the dimension-reduced energy characteristic values of the signals to obtain a one-dimensional data sequence comprises:
transposing the one-dimensional energy characteristic value of each signal to obtain an original P-dimensional characteristic vector of each signal;
dividing the original P-dimensional feature vector into N sections, calculating the average value of the feature vector of each section, and taking the average value of the feature vector of each section as output to obtain the N-dimensional feature vector of each signal, wherein P is greater than N, and N is greater than or equal to 1;
transposing the N-dimensional characteristic vectors of the signals to obtain one-dimensional characteristic vectors of the signals, and integrating the one-dimensional characteristic vectors of the signals to obtain a one-dimensional data sequence.
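The segment-averaging reduction in claim 5 is essentially piecewise aggregate approximation (PAA). A minimal sketch, assuming plain Python lists and a hypothetical helper name:

```python
def paa(features, n_sections):
    """Split a P-dimensional feature vector into N sections and output each section's mean,
    yielding an N-dimensional feature vector (P > N, N >= 1)."""
    p = len(features)
    out = []
    for k in range(n_sections):
        lo = k * p // n_sections          # section boundaries; uneven splits are tolerated
        hi = (k + 1) * p // n_sections
        section = features[lo:hi]
        out.append(sum(section) / len(section))
    return out
```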
6. The tool wear value monitoring method according to claim 1, wherein the wavelet transforming the multi-dimensional data sequence to obtain the energy characteristic value of the multi-dimensional data sequence comprises:
sub-sampling the multi-dimensional data sequence and cutting it into segments of M sampling points, wherein M is greater than or equal to 1.
7. The tool wear value monitoring method of claim 1, wherein imaging the one-dimensional data sequence to obtain an intermediate image comprises:
scaling all data of the one-dimensional data sequence to a preset interval to obtain a scaled one-dimensional data sequence, wherein the one-dimensional data sequence comprises the collected multi-dimensional data and timestamps corresponding to the multi-dimensional data;
mapping the scaled one-dimensional data sequence to point data in polar coordinates, wherein the point data comprises all points of the one-dimensional data sequence mapped into polar coordinates, and the timestamps and the multi-dimensional data are converted into the radius and the angle of the point data in polar coordinates, respectively;
and calculating, in polar coordinates, the angle sum and the angle difference between each point of the point data and every other point, and obtaining an N x N intermediate image by computing the cosine of the angle sum or the cosine of the angle difference.
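The encoding in claim 7 corresponds to the Gramian Angular Field family of image encodings. The sketch below is illustrative only: the scaling interval [-1, 1] and the function name are assumptions, and it follows the claim's wording (cosine of angle sums or of angle differences). It assumes a non-constant input series.

```python
import math

def gramian_image(series, difference=False):
    """Encode a 1-D series as an N x N image via polar-coordinate angle sums/differences."""
    lo, hi = min(series), max(series)
    scaled = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]  # scale to [-1, 1]
    phi = [math.acos(x) for x in scaled]                          # angle of each point
    if difference:
        return [[math.cos(a - b) for b in phi] for a in phi]      # cosine of angle difference
    return [[math.cos(a + b) for b in phi] for a in phi]          # cosine of angle sum
```

(The timestamp-to-radius mapping of the claim affects where each point sits along the spiral but does not enter the N x N cosine matrix itself, so it is omitted in this sketch.)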
8. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the tool wear value monitoring method of any one of claims 1 to 7.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to execute the tool wear value monitoring method according to any one of claims 1 to 7 when running.
Publications (2)

Publication Number Publication Date
CN112712063A CN112712063A (en) 2021-04-27
CN112712063B (en) 2022-04-26






Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant